Test Report: KVM_Linux_crio 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Failed tests (30/312)

| Order | Failed test                                                            | Duration (s) |
|-------|------------------------------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                                           | 74.41        |
| 34    | TestAddons/parallel/Ingress                                            | 152.59       |
| 36    | TestAddons/parallel/MetricsServer                                      | 346.04       |
| 164   | TestMultiControlPlane/serial/StopSecondaryNode                         | 142          |
| 166   | TestMultiControlPlane/serial/RestartSecondaryNode                      | 57.37        |
| 168   | TestMultiControlPlane/serial/RestartClusterKeepsNodes                  | 379.8        |
| 171   | TestMultiControlPlane/serial/StopCluster                               | 141.93       |
| 231   | TestMultiNode/serial/RestartKeepsNodes                                 | 331.9        |
| 233   | TestMultiNode/serial/StopMultiNode                                     | 141.61       |
| 240   | TestPreload                                                            | 272.66       |
| 248   | TestKubernetesUpgrade                                                  | 495.09       |
| 276   | TestPause/serial/SecondStartNoReconfiguration                          | 81.91        |
| 319   | TestStartStop/group/old-k8s-version/serial/FirstStart                  | 299.81       |
| 339   | TestStartStop/group/embed-certs/serial/Stop                            | 139.22       |
| 342   | TestStartStop/group/no-preload/serial/Stop                             | 139.06       |
| 345   | TestStartStop/group/default-k8s-diff-port/serial/Stop                  | 139.12       |
| 346   | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop            | 12.38        |
| 347   | TestStartStop/group/old-k8s-version/serial/DeployApp                   | 0.48         |
| 348   | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive      | 80.11        |
| 350   | TestStartStop/group/no-preload/serial/EnableAddonAfterStop             | 12.38        |
| 352   | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop  | 12.38        |
| 356   | TestStartStop/group/old-k8s-version/serial/SecondStart                 | 739.95       |
| 357   | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop           | 544.57       |
| 358   | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.47      |
| 359   | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop          | 544.48       |
| 360   | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop      | 543.77       |
| 361   | TestStartStop/group/no-preload/serial/AddonExistsAfterStop             | 338.03       |
| 362   | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop  | 447.92       |
| 363   | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop            | 372.18       |
| 364   | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop        | 133.77       |
TestAddons/parallel/Registry (74.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.525737ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008586334s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006018111s
addons_test.go:342: (dbg) Run:  kubectl --context addons-408385 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-408385 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-408385 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.084106667s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-408385 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 ip
2024/09/17 17:08:04 [DEBUG] GET http://192.168.39.170:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-408385 -n addons-408385
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 logs -n 25: (1.533907288s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-581824                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-581824                                                                     | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-285125                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-285125                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-581824                                                                     | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-285125                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | binary-mirror-510758                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36709                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510758                                                                     | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-408385 --wait=true                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-408385 ssh cat                                                                       | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | /opt/local-path-provisioner/pvc-909e1d4d-bf3e-45b2-8d6d-fc1ce31d7fc6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| ip      | addons-408385 ip                                                                            | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:51.791795   18924 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:51.792044   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792053   18924 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:51.792058   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792230   18924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 16:55:51.792827   18924 out.go:352] Setting JSON to false
	I0917 16:55:51.793665   18924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2267,"bootTime":1726589885,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:51.793765   18924 start.go:139] virtualization: kvm guest
	I0917 16:55:51.795973   18924 out.go:177] * [addons-408385] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:51.797387   18924 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:55:51.797381   18924 notify.go:220] Checking for updates...
	I0917 16:55:51.798951   18924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:51.800529   18924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:55:51.801832   18924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:51.803070   18924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 16:55:51.804253   18924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:55:51.805653   18924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:51.838070   18924 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 16:55:51.839376   18924 start.go:297] selected driver: kvm2
	I0917 16:55:51.839394   18924 start.go:901] validating driver "kvm2" against <nil>
	I0917 16:55:51.839405   18924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:55:51.840126   18924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.840207   18924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 16:55:51.855471   18924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 16:55:51.855528   18924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:51.855817   18924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:55:51.855861   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:55:51.855920   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:55:51.855931   18924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:51.855997   18924 start.go:340] cluster config:
	{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:51.856122   18924 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.858118   18924 out.go:177] * Starting "addons-408385" primary control-plane node in "addons-408385" cluster
	I0917 16:55:51.859487   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:55:51.859520   18924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:51.859551   18924 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:51.859643   18924 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 16:55:51.859654   18924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 16:55:51.859979   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:55:51.860003   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json: {Name:mkaab3d4715b6a1329fbbb57cdab9fd6bad92461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:51.860158   18924 start.go:360] acquireMachinesLock for addons-408385: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 16:55:51.860218   18924 start.go:364] duration metric: took 44.183µs to acquireMachinesLock for "addons-408385"
	I0917 16:55:51.860239   18924 start.go:93] Provisioning new machine with config: &{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:55:51.860305   18924 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 16:55:51.862121   18924 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 16:55:51.862257   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:55:51.862301   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:55:51.877059   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0917 16:55:51.877513   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:55:51.877999   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:55:51.878018   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:55:51.878383   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:55:51.878572   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:55:51.878714   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:55:51.878883   18924 start.go:159] libmachine.API.Create for "addons-408385" (driver="kvm2")
	I0917 16:55:51.878911   18924 client.go:168] LocalClient.Create starting
	I0917 16:55:51.878946   18924 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 16:55:51.947974   18924 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 16:55:52.056813   18924 main.go:141] libmachine: Running pre-create checks...
	I0917 16:55:52.056834   18924 main.go:141] libmachine: (addons-408385) Calling .PreCreateCheck
	I0917 16:55:52.057355   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:55:52.057806   18924 main.go:141] libmachine: Creating machine...
	I0917 16:55:52.057820   18924 main.go:141] libmachine: (addons-408385) Calling .Create
	I0917 16:55:52.057938   18924 main.go:141] libmachine: (addons-408385) Creating KVM machine...
	I0917 16:55:52.059242   18924 main.go:141] libmachine: (addons-408385) DBG | found existing default KVM network
	I0917 16:55:52.060009   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.059868   18946 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0917 16:55:52.060022   18924 main.go:141] libmachine: (addons-408385) DBG | created network xml: 
	I0917 16:55:52.060030   18924 main.go:141] libmachine: (addons-408385) DBG | <network>
	I0917 16:55:52.060035   18924 main.go:141] libmachine: (addons-408385) DBG |   <name>mk-addons-408385</name>
	I0917 16:55:52.060041   18924 main.go:141] libmachine: (addons-408385) DBG |   <dns enable='no'/>
	I0917 16:55:52.060045   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060051   18924 main.go:141] libmachine: (addons-408385) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0917 16:55:52.060058   18924 main.go:141] libmachine: (addons-408385) DBG |     <dhcp>
	I0917 16:55:52.060064   18924 main.go:141] libmachine: (addons-408385) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0917 16:55:52.060070   18924 main.go:141] libmachine: (addons-408385) DBG |     </dhcp>
	I0917 16:55:52.060083   18924 main.go:141] libmachine: (addons-408385) DBG |   </ip>
	I0917 16:55:52.060092   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060101   18924 main.go:141] libmachine: (addons-408385) DBG | </network>
	I0917 16:55:52.060112   18924 main.go:141] libmachine: (addons-408385) DBG | 
	I0917 16:55:52.065525   18924 main.go:141] libmachine: (addons-408385) DBG | trying to create private KVM network mk-addons-408385 192.168.39.0/24...
	I0917 16:55:52.130546   18924 main.go:141] libmachine: (addons-408385) DBG | private KVM network mk-addons-408385 192.168.39.0/24 created
	I0917 16:55:52.130574   18924 main.go:141] libmachine: (addons-408385) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.130589   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.130546   18946 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.130612   18924 main.go:141] libmachine: (addons-408385) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 16:55:52.130765   18924 main.go:141] libmachine: (addons-408385) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 16:55:52.385741   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.385631   18946 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa...
	I0917 16:55:52.511387   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511277   18946 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk...
	I0917 16:55:52.511413   18924 main.go:141] libmachine: (addons-408385) DBG | Writing magic tar header
	I0917 16:55:52.511427   18924 main.go:141] libmachine: (addons-408385) DBG | Writing SSH key tar header
	I0917 16:55:52.511451   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511387   18946 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.511506   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385
	I0917 16:55:52.511525   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 16:55:52.511538   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 (perms=drwx------)
	I0917 16:55:52.511548   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.511562   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 16:55:52.511573   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 16:55:52.511586   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 16:55:52.511598   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 16:55:52.511610   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 16:55:52.511622   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 16:55:52.511634   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 16:55:52.511646   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins
	I0917 16:55:52.511656   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:52.511669   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home
	I0917 16:55:52.511682   18924 main.go:141] libmachine: (addons-408385) DBG | Skipping /home - not owner
	I0917 16:55:52.512603   18924 main.go:141] libmachine: (addons-408385) define libvirt domain using xml: 
	I0917 16:55:52.512625   18924 main.go:141] libmachine: (addons-408385) <domain type='kvm'>
	I0917 16:55:52.512635   18924 main.go:141] libmachine: (addons-408385)   <name>addons-408385</name>
	I0917 16:55:52.512642   18924 main.go:141] libmachine: (addons-408385)   <memory unit='MiB'>4000</memory>
	I0917 16:55:52.512649   18924 main.go:141] libmachine: (addons-408385)   <vcpu>2</vcpu>
	I0917 16:55:52.512661   18924 main.go:141] libmachine: (addons-408385)   <features>
	I0917 16:55:52.512670   18924 main.go:141] libmachine: (addons-408385)     <acpi/>
	I0917 16:55:52.512679   18924 main.go:141] libmachine: (addons-408385)     <apic/>
	I0917 16:55:52.512690   18924 main.go:141] libmachine: (addons-408385)     <pae/>
	I0917 16:55:52.512699   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.512706   18924 main.go:141] libmachine: (addons-408385)   </features>
	I0917 16:55:52.512714   18924 main.go:141] libmachine: (addons-408385)   <cpu mode='host-passthrough'>
	I0917 16:55:52.512721   18924 main.go:141] libmachine: (addons-408385)   
	I0917 16:55:52.512730   18924 main.go:141] libmachine: (addons-408385)   </cpu>
	I0917 16:55:52.512740   18924 main.go:141] libmachine: (addons-408385)   <os>
	I0917 16:55:52.512749   18924 main.go:141] libmachine: (addons-408385)     <type>hvm</type>
	I0917 16:55:52.512760   18924 main.go:141] libmachine: (addons-408385)     <boot dev='cdrom'/>
	I0917 16:55:52.512769   18924 main.go:141] libmachine: (addons-408385)     <boot dev='hd'/>
	I0917 16:55:52.512778   18924 main.go:141] libmachine: (addons-408385)     <bootmenu enable='no'/>
	I0917 16:55:52.512784   18924 main.go:141] libmachine: (addons-408385)   </os>
	I0917 16:55:52.512790   18924 main.go:141] libmachine: (addons-408385)   <devices>
	I0917 16:55:52.512802   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='cdrom'>
	I0917 16:55:52.512812   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/boot2docker.iso'/>
	I0917 16:55:52.512819   18924 main.go:141] libmachine: (addons-408385)       <target dev='hdc' bus='scsi'/>
	I0917 16:55:52.512824   18924 main.go:141] libmachine: (addons-408385)       <readonly/>
	I0917 16:55:52.512834   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512861   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='disk'>
	I0917 16:55:52.512887   18924 main.go:141] libmachine: (addons-408385)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 16:55:52.512920   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk'/>
	I0917 16:55:52.512947   18924 main.go:141] libmachine: (addons-408385)       <target dev='hda' bus='virtio'/>
	I0917 16:55:52.512961   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512977   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.512987   18924 main.go:141] libmachine: (addons-408385)       <source network='mk-addons-408385'/>
	I0917 16:55:52.512994   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.512999   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513008   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.513020   18924 main.go:141] libmachine: (addons-408385)       <source network='default'/>
	I0917 16:55:52.513030   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.513041   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513054   18924 main.go:141] libmachine: (addons-408385)     <serial type='pty'>
	I0917 16:55:52.513065   18924 main.go:141] libmachine: (addons-408385)       <target port='0'/>
	I0917 16:55:52.513074   18924 main.go:141] libmachine: (addons-408385)     </serial>
	I0917 16:55:52.513083   18924 main.go:141] libmachine: (addons-408385)     <console type='pty'>
	I0917 16:55:52.513090   18924 main.go:141] libmachine: (addons-408385)       <target type='serial' port='0'/>
	I0917 16:55:52.513100   18924 main.go:141] libmachine: (addons-408385)     </console>
	I0917 16:55:52.513110   18924 main.go:141] libmachine: (addons-408385)     <rng model='virtio'>
	I0917 16:55:52.513123   18924 main.go:141] libmachine: (addons-408385)       <backend model='random'>/dev/random</backend>
	I0917 16:55:52.513136   18924 main.go:141] libmachine: (addons-408385)     </rng>
	I0917 16:55:52.513146   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513151   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513161   18924 main.go:141] libmachine: (addons-408385)   </devices>
	I0917 16:55:52.513168   18924 main.go:141] libmachine: (addons-408385) </domain>
	I0917 16:55:52.513179   18924 main.go:141] libmachine: (addons-408385) 
	I0917 16:55:52.519149   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:10:0b:0b in network default
	I0917 16:55:52.519688   18924 main.go:141] libmachine: (addons-408385) Ensuring networks are active...
	I0917 16:55:52.519712   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:52.520323   18924 main.go:141] libmachine: (addons-408385) Ensuring network default is active
	I0917 16:55:52.520629   18924 main.go:141] libmachine: (addons-408385) Ensuring network mk-addons-408385 is active
	I0917 16:55:52.521053   18924 main.go:141] libmachine: (addons-408385) Getting domain xml...
	I0917 16:55:52.521710   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:53.811430   18924 main.go:141] libmachine: (addons-408385) Waiting to get IP...
	I0917 16:55:53.812152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:53.812522   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:53.812543   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:53.812493   18946 retry.go:31] will retry after 197.5195ms: waiting for machine to come up
	I0917 16:55:54.012026   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.012441   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.012468   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.012412   18946 retry.go:31] will retry after 326.010953ms: waiting for machine to come up
	I0917 16:55:54.339858   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.340287   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.340312   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.340239   18946 retry.go:31] will retry after 296.869686ms: waiting for machine to come up
	I0917 16:55:54.638673   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.639104   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.639128   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.639060   18946 retry.go:31] will retry after 392.314611ms: waiting for machine to come up
	I0917 16:55:55.032985   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.033655   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.033684   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.033600   18946 retry.go:31] will retry after 585.264566ms: waiting for machine to come up
	I0917 16:55:55.620073   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.620498   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.620534   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.620466   18946 retry.go:31] will retry after 797.322744ms: waiting for machine to come up
	I0917 16:55:56.419607   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:56.420088   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:56.420115   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:56.420046   18946 retry.go:31] will retry after 1.028584855s: waiting for machine to come up
	I0917 16:55:57.450058   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:57.450474   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:57.450503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:57.450420   18946 retry.go:31] will retry after 1.43599402s: waiting for machine to come up
	I0917 16:55:58.888104   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:58.888459   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:58.888481   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:58.888437   18946 retry.go:31] will retry after 1.280603811s: waiting for machine to come up
	I0917 16:56:00.170844   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:00.171138   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:00.171158   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:00.171116   18946 retry.go:31] will retry after 1.674811656s: waiting for machine to come up
	I0917 16:56:01.848038   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:01.848477   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:01.848503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:01.848445   18946 retry.go:31] will retry after 2.792716027s: waiting for machine to come up
	I0917 16:56:04.644899   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:04.645317   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:04.645336   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:04.645282   18946 retry.go:31] will retry after 2.720169067s: waiting for machine to come up
	I0917 16:56:07.367470   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:07.367874   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:07.367899   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:07.367847   18946 retry.go:31] will retry after 4.528965555s: waiting for machine to come up
	I0917 16:56:11.898213   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:11.898579   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:11.898600   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:11.898539   18946 retry.go:31] will retry after 4.262922802s: waiting for machine to come up
	I0917 16:56:16.165468   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.165964   18924 main.go:141] libmachine: (addons-408385) Found IP for machine: 192.168.39.170
	I0917 16:56:16.165979   18924 main.go:141] libmachine: (addons-408385) Reserving static IP address...
	I0917 16:56:16.165988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has current primary IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.166352   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find host DHCP lease matching {name: "addons-408385", mac: "52:54:00:69:b5:a2", ip: "192.168.39.170"} in network mk-addons-408385
	I0917 16:56:16.239610   18924 main.go:141] libmachine: (addons-408385) DBG | Getting to WaitForSSH function...
	I0917 16:56:16.239655   18924 main.go:141] libmachine: (addons-408385) Reserved static IP address: 192.168.39.170
	I0917 16:56:16.239670   18924 main.go:141] libmachine: (addons-408385) Waiting for SSH to be available...
	I0917 16:56:16.242205   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242648   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.242681   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242868   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH client type: external
	I0917 16:56:16.242892   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa (-rw-------)
	I0917 16:56:16.242919   18924 main.go:141] libmachine: (addons-408385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 16:56:16.242929   18924 main.go:141] libmachine: (addons-408385) DBG | About to run SSH command:
	I0917 16:56:16.242938   18924 main.go:141] libmachine: (addons-408385) DBG | exit 0
	I0917 16:56:16.377461   18924 main.go:141] libmachine: (addons-408385) DBG | SSH cmd err, output: <nil>: 
	I0917 16:56:16.377719   18924 main.go:141] libmachine: (addons-408385) KVM machine creation complete!
	I0917 16:56:16.378103   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:16.378639   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378776   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378886   18924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 16:56:16.378895   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:16.380224   18924 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 16:56:16.380240   18924 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 16:56:16.380247   18924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 16:56:16.380282   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.382400   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382795   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.382826   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382937   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.383090   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383243   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.383453   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.383654   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.383667   18924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 16:56:16.496650   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:16.496684   18924 main.go:141] libmachine: Detecting the provisioner...
	I0917 16:56:16.496692   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.499052   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499387   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.499419   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499509   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.499704   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499841   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499969   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.500153   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.500355   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.500368   18924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 16:56:16.614164   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 16:56:16.614231   18924 main.go:141] libmachine: found compatible host: buildroot
	I0917 16:56:16.614239   18924 main.go:141] libmachine: Provisioning with buildroot...
	I0917 16:56:16.614251   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614509   18924 buildroot.go:166] provisioning hostname "addons-408385"
	I0917 16:56:16.614541   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614725   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.616892   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617265   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.617292   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617459   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.617618   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617766   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617880   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.618037   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.618259   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.618274   18924 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-408385 && echo "addons-408385" | sudo tee /etc/hostname
	I0917 16:56:16.748306   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408385
	
	I0917 16:56:16.748338   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.751036   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751353   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.751375   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751594   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.751810   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.751967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.752091   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.752236   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.752408   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.752423   18924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-408385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-408385/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-408385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:56:16.874871   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:16.874903   18924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 16:56:16.874921   18924 buildroot.go:174] setting up certificates
	I0917 16:56:16.874931   18924 provision.go:84] configureAuth start
	I0917 16:56:16.874941   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.875174   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:16.877616   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.877962   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.877988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.878128   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.879974   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880235   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.880259   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880362   18924 provision.go:143] copyHostCerts
	I0917 16:56:16.880447   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 16:56:16.880581   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 16:56:16.880694   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 16:56:16.880808   18924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.addons-408385 san=[127.0.0.1 192.168.39.170 addons-408385 localhost minikube]
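(Editor's note: the line above shows the server certificate being generated with IP and DNS SANs. The following is a minimal, illustrative Go sketch of issuing such a cert with crypto/x509, signed by an existing CA; it is not minikube's provision code, the file paths are placeholders, the CA key is assumed to be RSA/PKCS#1, and error handling is elided for brevity.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and private key (placeholder paths; errors ignored for brevity).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// New server key plus a template carrying the SANs from the log line above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-408385"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.170")},
		DNSNames:     []string{"addons-408385", "localhost", "minikube"},
	}

	// Sign the server certificate with the CA key and emit it as PEM.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}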
	I0917 16:56:17.201888   18924 provision.go:177] copyRemoteCerts
	I0917 16:56:17.201953   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:56:17.201979   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.204413   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204738   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.204767   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204895   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.205077   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.205246   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.205392   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.291808   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 16:56:17.316923   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:56:17.341072   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 16:56:17.365516   18924 provision.go:87] duration metric: took 490.573886ms to configureAuth
	I0917 16:56:17.365539   18924 buildroot.go:189] setting minikube options for container-runtime
	I0917 16:56:17.365730   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:17.365826   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.368283   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368639   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.368670   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368823   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.369022   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369153   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369339   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.369514   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.369693   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.369712   18924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 16:56:17.597824   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 16:56:17.597848   18924 main.go:141] libmachine: Checking connection to Docker...
	I0917 16:56:17.597855   18924 main.go:141] libmachine: (addons-408385) Calling .GetURL
	I0917 16:56:17.599183   18924 main.go:141] libmachine: (addons-408385) DBG | Using libvirt version 6000000
	I0917 16:56:17.601596   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.601942   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.602006   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.602117   18924 main.go:141] libmachine: Docker is up and running!
	I0917 16:56:17.602131   18924 main.go:141] libmachine: Reticulating splines...
	I0917 16:56:17.602139   18924 client.go:171] duration metric: took 25.723220135s to LocalClient.Create
	I0917 16:56:17.602162   18924 start.go:167] duration metric: took 25.723279645s to libmachine.API.Create "addons-408385"
	I0917 16:56:17.602175   18924 start.go:293] postStartSetup for "addons-408385" (driver="kvm2")
	I0917 16:56:17.602188   18924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:17.602210   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.602465   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:17.602494   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.604650   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.604946   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.604964   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.605100   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.605274   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.605409   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.605565   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.694995   18924 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:56:17.699639   18924 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 16:56:17.699666   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 16:56:17.699739   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 16:56:17.699761   18924 start.go:296] duration metric: took 97.580146ms for postStartSetup
	I0917 16:56:17.699789   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:17.700415   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.702737   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703149   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.703177   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703448   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:56:17.703625   18924 start.go:128] duration metric: took 25.843310151s to createHost
	I0917 16:56:17.703646   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.705890   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706224   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.706252   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706358   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.706557   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706719   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706848   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.706979   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.707143   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.707155   18924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 16:56:17.822141   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726592177.789241010
	
	I0917 16:56:17.822164   18924 fix.go:216] guest clock: 1726592177.789241010
	I0917 16:56:17.822171   18924 fix.go:229] Guest: 2024-09-17 16:56:17.78924101 +0000 UTC Remote: 2024-09-17 16:56:17.703636441 +0000 UTC m=+25.947315089 (delta=85.604569ms)
	I0917 16:56:17.822210   18924 fix.go:200] guest clock delta is within tolerance: 85.604569ms
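(Editor's note: fix.go above reads the guest clock via `date +%s.%N`, computes the delta against the host, and accepts it because it is within tolerance. Below is a minimal sketch of that kind of check; the 2s tolerance and function names are assumptions, not minikube's actual values.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and reports whether the
// skew against the host clock is inside the allowed tolerance.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (bool, time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return false, 0, err
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol, delta, nil
}

func main() {
	ok, delta, _ := withinTolerance("1726592177.789241010", time.Now(), 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}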
	I0917 16:56:17.822215   18924 start.go:83] releasing machines lock for "addons-408385", held for 25.961986034s
	I0917 16:56:17.822238   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.822502   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.825005   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825336   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.825360   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825513   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826069   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826274   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826383   18924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 16:56:17.826443   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.826489   18924 ssh_runner.go:195] Run: cat /version.json
	I0917 16:56:17.826513   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.829125   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829486   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829512   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829533   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829632   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.829794   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.829906   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829934   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829954   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830071   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.830128   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.830224   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.830373   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830521   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.951534   18924 ssh_runner.go:195] Run: systemctl --version
	I0917 16:56:17.958040   18924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 16:56:18.115686   18924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 16:56:18.123126   18924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 16:56:18.123194   18924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:18.140793   18924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
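(Editor's note: the `find ... -exec mv` above renames any bridge/podman CNI config in /etc/cni/net.d to *.mk_disabled so only the CNI that minikube manages is left active. A rough Go equivalent of that step, as an illustrative sketch rather than minikube's cni.go:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}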
	I0917 16:56:18.140817   18924 start.go:495] detecting cgroup driver to use...
	I0917 16:56:18.140888   18924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 16:56:18.158500   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 16:56:18.173453   18924 docker.go:217] disabling cri-docker service (if available) ...
	I0917 16:56:18.173513   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 16:56:18.187957   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 16:56:18.202598   18924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 16:56:18.333027   18924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 16:56:18.469130   18924 docker.go:233] disabling docker service ...
	I0917 16:56:18.469199   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 16:56:18.484667   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 16:56:18.498998   18924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 16:56:18.641389   18924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 16:56:18.776008   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 16:56:18.790837   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:18.812674   18924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 16:56:18.812737   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.823898   18924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 16:56:18.823956   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.834933   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.845553   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.856619   18924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:18.868015   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.879257   18924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.899805   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
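(Editor's note: the sed commands above rewrite the pause image and cgroup manager in CRI-O's drop-in config. A rough Go equivalent of those two edits, assuming /etc/crio/crio.conf.d/02-crio.conf already contains pause_image and cgroup_manager lines; this is a sketch, not minikube's crio.go.)

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}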
	I0917 16:56:18.911427   18924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:18.921735   18924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 16:56:18.921790   18924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 16:56:18.936457   18924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
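(Editor's note: the sysctl check above fails with status 255 because the bridge-nf-call-iptables entry only exists once br_netfilter is loaded, so the module is loaded with modprobe and IPv4 forwarding is enabled via /proc. A minimal sketch of that fallback, not minikube's code; requires root.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl entry appears only after the br_netfilter module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println(err)
	}
}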
	I0917 16:56:18.946747   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:19.065494   18924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 16:56:19.226108   18924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 16:56:19.226205   18924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 16:56:19.231213   18924 start.go:563] Will wait 60s for crictl version
	I0917 16:56:19.231297   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:56:19.235087   18924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:56:19.281633   18924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 16:56:19.281783   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.311850   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.341785   18924 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 16:56:19.343242   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:19.345825   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346167   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:19.346191   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346407   18924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:19.350778   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:19.364110   18924 kubeadm.go:883] updating cluster {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:19.364217   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:56:19.364273   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:19.396930   18924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 16:56:19.397013   18924 ssh_runner.go:195] Run: which lz4
	I0917 16:56:19.401270   18924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 16:56:19.405740   18924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 16:56:19.405769   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 16:56:20.822525   18924 crio.go:462] duration metric: took 1.421306506s to copy over tarball
	I0917 16:56:20.822624   18924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 16:56:23.006691   18924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.184029221s)
	I0917 16:56:23.006730   18924 crio.go:469] duration metric: took 2.18417646s to extract the tarball
	I0917 16:56:23.006741   18924 ssh_runner.go:146] rm: /preloaded.tar.lz4
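(Editor's note: the lines above show the preload flow: an existence check for /preloaded.tar.lz4, an scp of the cached tarball when it is missing, an lz4 tar extraction into /var, and removal of the tarball. A compact Go sketch of that sequence on the guest side; the SSH copy is only stubbed out, error handling is minimal, and this is not minikube's implementation.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing; it would be copied over SSH here")
		return
	}
	// Mirror: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	if err := run("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Println("extract:", err)
		return
	}
	_ = os.Remove(tarball)
}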
	I0917 16:56:23.043946   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:23.086263   18924 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 16:56:23.086285   18924 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:56:23.086293   18924 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.1 crio true true} ...
	I0917 16:56:23.086391   18924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-408385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 16:56:23.086476   18924 ssh_runner.go:195] Run: crio config
	I0917 16:56:23.135589   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:23.135612   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:23.135622   18924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:23.135642   18924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-408385 NodeName:addons-408385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:23.135765   18924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-408385"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 16:56:23.135824   18924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:23.146424   18924 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:56:23.146483   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:23.156664   18924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 16:56:23.176236   18924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:23.195926   18924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0917 16:56:23.215956   18924 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:23.220278   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:23.233718   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:23.361479   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:23.378343   18924 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385 for IP: 192.168.39.170
	I0917 16:56:23.378364   18924 certs.go:194] generating shared ca certs ...
	I0917 16:56:23.378379   18924 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.378538   18924 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 16:56:23.468659   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt ...
	I0917 16:56:23.468687   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt: {Name:mk4b2dc121f54e472a610da41ce39781730efcb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468849   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key ...
	I0917 16:56:23.468860   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key: {Name:mk39fdbf9eb5c96a10b5f07aaa642e9ef6ef62c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468930   18924 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 16:56:23.595987   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt ...
	I0917 16:56:23.596018   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt: {Name:mk688819f8e2946789f357ecd51fe07706693989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596170   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key ...
	I0917 16:56:23.596179   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key: {Name:mkcde83262d3acd542cf7897dccc5670ae8cce18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596265   18924 certs.go:256] generating profile certs ...
	I0917 16:56:23.596328   18924 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key
	I0917 16:56:23.596374   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt with IP's: []
	I0917 16:56:23.869724   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt ...
	I0917 16:56:23.869759   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: {Name:mk4d7f220fa0245c5bbf00a3bd85f1e0aa7b9b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.869952   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key ...
	I0917 16:56:23.869965   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key: {Name:mka2d16d15d95cd3b1c29597e7f457020bb94a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.870061   18924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253
	I0917 16:56:23.870080   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170]
	I0917 16:56:24.042828   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 ...
	I0917 16:56:24.042859   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253: {Name:mkcf5a60df0a4773d88e8945f55342f4090e0047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043040   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 ...
	I0917 16:56:24.043056   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253: {Name:mk4c9b250fe83846f2bf2a73f79edfbf255dff83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043155   18924 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt
	I0917 16:56:24.043233   18924 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key
	I0917 16:56:24.043281   18924 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key
	I0917 16:56:24.043297   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt with IP's: []
	I0917 16:56:24.187225   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt ...
	I0917 16:56:24.187252   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt: {Name:mk2cb67c490b7c4e2ac97ea0e98192c0133b5d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187447   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key ...
	I0917 16:56:24.187462   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key: {Name:mk7886820c83ede55497d40d59a86ffc001d73bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187650   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 16:56:24.187683   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 16:56:24.187708   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:24.187731   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:24.188296   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:24.217099   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:56:24.260095   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:24.286974   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 16:56:24.312555   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:56:24.338456   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:24.364498   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:24.390393   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 16:56:24.416565   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:24.441061   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:56:24.459229   18924 ssh_runner.go:195] Run: openssl version
	I0917 16:56:24.466207   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:24.477993   18924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482776   18924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482851   18924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.489986   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
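(Editor's note: the two commands above compute the CA certificate's subject hash with `openssl x509 -hash -noout` and create the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients use to look up trusted CAs. An illustrative Go sketch of that step; the openssl invocation mirrors the log, the rest is not minikube's code and assumes the PEM already exists at the path shown.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// Mirror: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // refresh an existing link, if any
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println(err)
	}
}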
	I0917 16:56:24.501914   18924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:24.506316   18924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:24.506374   18924 kubeadm.go:392] StartCluster: {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:24.506440   18924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 16:56:24.506497   18924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 16:56:24.546313   18924 cri.go:89] found id: ""
	I0917 16:56:24.546370   18924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:24.556630   18924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:24.567104   18924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:24.577871   18924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:24.577897   18924 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:24.577941   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:24.588136   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:24.588194   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:24.598858   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:24.608830   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:24.608895   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:24.619369   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.630137   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:24.630198   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.640661   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:24.650527   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:24.650585   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:24.661071   18924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 16:56:24.716386   18924 kubeadm.go:310] W0917 16:56:24.688487     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.717689   18924 kubeadm.go:310] W0917 16:56:24.690025     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.829103   18924 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:56:35.968996   18924 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:35.969071   18924 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:35.969172   18924 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:35.969326   18924 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:35.969456   18924 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:35.969552   18924 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:35.971346   18924 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:35.971417   18924 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:35.971479   18924 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:35.971560   18924 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:35.971628   18924 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:35.971688   18924 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:35.971734   18924 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:35.971786   18924 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:35.971889   18924 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.971939   18924 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:35.972038   18924 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.972112   18924 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:35.972189   18924 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:35.972237   18924 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:35.972303   18924 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:35.972346   18924 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:35.972402   18924 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:35.972454   18924 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:35.972511   18924 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:35.972592   18924 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:35.972711   18924 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:35.972783   18924 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:35.975168   18924 out.go:235]   - Booting up control plane ...
	I0917 16:56:35.975264   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:35.975333   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:35.975390   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:35.975497   18924 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:35.975587   18924 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:35.975627   18924 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:35.975737   18924 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:35.975844   18924 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:35.975901   18924 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001493427s
	I0917 16:56:35.975973   18924 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:35.976034   18924 kubeadm.go:310] [api-check] The API server is healthy after 5.001561419s
	I0917 16:56:35.976169   18924 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:35.976274   18924 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:35.976324   18924 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:35.976482   18924 kubeadm.go:310] [mark-control-plane] Marking the node addons-408385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:35.976537   18924 kubeadm.go:310] [bootstrap-token] Using token: sa12t0.gjj5918ic1mqv0s7
	I0917 16:56:35.977945   18924 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:35.978054   18924 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:35.978128   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:35.978288   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:35.978410   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:35.978518   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:35.978615   18924 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:35.978719   18924 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:35.978764   18924 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:35.978818   18924 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:35.978838   18924 kubeadm.go:310] 
	I0917 16:56:35.978908   18924 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:35.978916   18924 kubeadm.go:310] 
	I0917 16:56:35.978996   18924 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:35.979002   18924 kubeadm.go:310] 
	I0917 16:56:35.979023   18924 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:35.979079   18924 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:35.979124   18924 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:35.979130   18924 kubeadm.go:310] 
	I0917 16:56:35.979179   18924 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:35.979186   18924 kubeadm.go:310] 
	I0917 16:56:35.979225   18924 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:35.979231   18924 kubeadm.go:310] 
	I0917 16:56:35.979277   18924 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:35.979341   18924 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:35.979408   18924 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:35.979414   18924 kubeadm.go:310] 
	I0917 16:56:35.979487   18924 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:35.979556   18924 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:35.979562   18924 kubeadm.go:310] 
	I0917 16:56:35.979647   18924 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.979750   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 16:56:35.979771   18924 kubeadm.go:310] 	--control-plane 
	I0917 16:56:35.979776   18924 kubeadm.go:310] 
	I0917 16:56:35.979853   18924 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:35.979861   18924 kubeadm.go:310] 
	I0917 16:56:35.979942   18924 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.980055   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 16:56:35.980068   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:35.980074   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:35.982263   18924 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:35.983608   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:35.994882   18924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 16:56:36.019583   18924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:36.019687   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.019738   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-408385 minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-408385 minikube.k8s.io/primary=true
	I0917 16:56:36.048300   18924 ops.go:34] apiserver oom_adj: -16
	I0917 16:56:36.170162   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.670383   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.170820   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.671076   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.170926   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.671033   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.170837   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.670394   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.171111   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.279973   18924 kubeadm.go:1113] duration metric: took 4.260359264s to wait for elevateKubeSystemPrivileges
	I0917 16:56:40.280020   18924 kubeadm.go:394] duration metric: took 15.773648579s to StartCluster
	I0917 16:56:40.280041   18924 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280170   18924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:56:40.280550   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280764   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:40.280775   18924 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:56:40.280828   18924 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 16:56:40.280929   18924 addons.go:69] Setting inspektor-gadget=true in profile "addons-408385"
	I0917 16:56:40.280942   18924 addons.go:69] Setting volcano=true in profile "addons-408385"
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon volcano=true in "addons-408385"
	I0917 16:56:40.280953   18924 addons.go:69] Setting storage-provisioner=true in profile "addons-408385"
	I0917 16:56:40.280966   18924 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-408385"
	I0917 16:56:40.280977   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.280993   18924 addons.go:69] Setting volumesnapshots=true in profile "addons-408385"
	I0917 16:56:40.280996   18924 addons.go:69] Setting metrics-server=true in profile "addons-408385"
	I0917 16:56:40.281007   18924 addons.go:234] Setting addon volumesnapshots=true in "addons-408385"
	I0917 16:56:40.281017   18924 addons.go:69] Setting helm-tiller=true in profile "addons-408385"
	I0917 16:56:40.280959   18924 addons.go:69] Setting cloud-spanner=true in profile "addons-408385"
	I0917 16:56:40.281025   18924 addons.go:69] Setting ingress-dns=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting default-storageclass=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting gcp-auth=true in profile "addons-408385"
	I0917 16:56:40.281038   18924 addons.go:234] Setting addon ingress-dns=true in "addons-408385"
	I0917 16:56:40.281029   18924 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-408385"
	I0917 16:56:40.281044   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-408385"
	I0917 16:56:40.281049   18924 mustload.go:65] Loading cluster: addons-408385
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-408385"
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon cloud-spanner=true in "addons-408385"
	I0917 16:56:40.281064   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon inspektor-gadget=true in "addons-408385"
	I0917 16:56:40.281084   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281092   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281104   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280980   18924 addons.go:234] Setting addon storage-provisioner=true in "addons-408385"
	I0917 16:56:40.281211   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.281258   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281547   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281572   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281587   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281033   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281010   18924 addons.go:234] Setting addon metrics-server=true in "addons-408385"
	I0917 16:56:40.281537   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.280928   18924 addons.go:69] Setting yakd=true in profile "addons-408385"
	I0917 16:56:40.280984   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281654   18924 addons.go:234] Setting addon yakd=true in "addons-408385"
	I0917 16:56:40.280987   18924 addons.go:69] Setting registry=true in profile "addons-408385"
	I0917 16:56:40.281672   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281014   18924 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:40.281028   18924 addons.go:234] Setting addon helm-tiller=true in "addons-408385"
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281702   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281674   18924 addons.go:234] Setting addon registry=true in "addons-408385"
	I0917 16:56:40.281712   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281541   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281732   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281742   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281021   18924 addons.go:69] Setting ingress=true in profile "addons-408385"
	I0917 16:56:40.281764   18924 addons.go:234] Setting addon ingress=true in "addons-408385"
	I0917 16:56:40.280936   18924 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-408385"
	I0917 16:56:40.281825   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-408385"
	I0917 16:56:40.281873   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281950   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282079   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282161   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282186   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282226   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282238   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282249   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282261   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282263   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282286   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282307   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282229   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282101   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282578   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282605   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282779   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282827   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282866   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.283094   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.283133   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.286312   18924 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:40.287618   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:40.298908   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I0917 16:56:40.299068   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0917 16:56:40.309608   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.309649   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.309700   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0917 16:56:40.309807   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0917 16:56:40.310065   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.310120   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.311980   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312056   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312293   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312867   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.312887   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313019   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313031   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313141   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313152   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313482   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313528   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.313558   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313604   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.314157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314183   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.314467   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.314488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.314708   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315123   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.315527   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.315559   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315951   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.320566   18924 addons.go:234] Setting addon default-storageclass=true in "addons-408385"
	I0917 16:56:40.320611   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.320981   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.321026   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.346242   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0917 16:56:40.346807   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.347541   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.347571   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.348071   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.353705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.357689   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0917 16:56:40.358028   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0917 16:56:40.358152   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0917 16:56:40.358342   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0917 16:56:40.358940   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I0917 16:56:40.359063   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359591   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359683   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.359700   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.359848   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359959   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0917 16:56:40.360078   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.360347   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360572   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360585   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360591   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360604   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360670   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360866   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360880   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.360892   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360995   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.361576   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361617   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361638   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.361651   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.361713   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361816   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362002   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362194   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.362474   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.362507   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.363919   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0917 16:56:40.364019   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364306   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364485   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.364489   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.364503   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.365582   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0917 16:56:40.365887   18924 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-408385"
	I0917 16:56:40.365928   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.366157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366191   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366314   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366338   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366584   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.366594   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0917 16:56:40.385020   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0917 16:56:40.385053   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0917 16:56:40.385026   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0917 16:56:40.385345   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385360   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385377   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385392   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385441   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.385849   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.385945   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.386207   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.386235   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.386770   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386839   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.386856   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.386896   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387149   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.387217   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.387351   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387369   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387504   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387514   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387573   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.387723   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.387999   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.388667   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.388686   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.388751   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0917 16:56:40.389456   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.389491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.390745   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.390799   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.390825   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.390929   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.391352   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.391419   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.391635   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0917 16:56:40.391780   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.391820   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.391906   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.391922   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.392274   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.392725   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.392756   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.392952   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I0917 16:56:40.393072   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.393464   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.393477   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.393806   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.393834   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.393926   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.394284   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.394301   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.394504   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395211   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395377   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395596   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395806   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.396088   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.396426   18924 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:40.396481   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:40.398128   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398337   18924 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.398355   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:40.398374   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.398436   18924 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:40.398463   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398937   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:40.399242   18924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:40.399265   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.399639   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:56:40.400906   18924 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:56:40.400950   18924 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:40.401326   18924 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:40.401347   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.401945   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402595   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.402623   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402809   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.402906   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:40.402919   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:56:40.402936   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.402975   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.403463   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.403552   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.403728   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.406113   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.406794   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406822   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0917 16:56:40.406831   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406851   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.406868   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.407039   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.407412   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.407477   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I0917 16:56:40.407478   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.407595   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.407739   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.407938   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.407953   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:56:40.407967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.408588   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408686   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.408706   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408715   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.409106   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.409130   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.409365   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.409533   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.409652   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.409751   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.410256   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.410275   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.410457   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.410629   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.410870   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.410885   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.410934   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.411338   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.411612   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.411868   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0917 16:56:40.412251   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.412291   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.412296   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.412474   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.412838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.412875   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.412954   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.412975   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.413015   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.413065   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.413625   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.413666   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.413850   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.414043   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.414175   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.414769   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.415585   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.417193   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.417660   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0917 16:56:40.418242   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.418815   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.418842   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.419070   18924 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:40.419428   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.420115   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.420155   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.420492   18924 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:40.420506   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:40.420522   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.422261   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0917 16:56:40.423377   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.423827   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424457   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.424478   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.424549   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.424568   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424717   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.424845   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.424938   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.425025   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.425342   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.427630   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0917 16:56:40.428234   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.428248   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0917 16:56:40.428817   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.428839   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.428912   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.429324   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.429475   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.429488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.429563   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429599   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429886   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0917 16:56:40.430318   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.430434   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.430844   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.430971   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.430982   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.431353   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.431404   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.431891   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.433596   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.435129   18924 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:40.435873   18924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:40.436250   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.436549   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:40.436566   18924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:40.436587   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437385   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:40.437402   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:40.437420   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437904   18924 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:40.439229   18924 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:40.440893   18924 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:40.440910   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:40.440929   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.442279   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.442325   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0917 16:56:40.442832   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443262   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443270   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.443295   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443522   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443551   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443747   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.443765   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.443793   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443812   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443956   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.443983   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.444081   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444089   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444200   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0917 16:56:40.444225   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.444247   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.444406   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.444567   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.444597   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.445584   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.445602   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.446414   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.446488   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0917 16:56:40.446604   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.447066   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.447281   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.448242   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.448260   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.448317   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.448690   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:40.448734   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.449156   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.449170   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.449190   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.449336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.449490   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.449542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.449675   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.449801   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.450234   18924 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:56:40.451504   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:40.451601   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.451622   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:56:40.451645   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.452360   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0917 16:56:40.452543   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.452876   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.453320   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.453340   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:40.453929   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.454105   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.454452   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0917 16:56:40.454781   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.455056   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:40.455077   18924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:40.455161   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.455248   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455502   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.455528   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455769   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.455786   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.455855   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.455994   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.456119   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.456170   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:40.456370   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.456909   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.457348   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.457979   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458463   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.458485   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.458484   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458600   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:40.458671   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.458710   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.458729   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.458887   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.458918   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.458929   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.458938   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.458939   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.459021   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.460243   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.460246   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.460263   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	W0917 16:56:40.460349   18924 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 16:56:40.460425   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.460624   18924 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:40.460639   18924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:40.460661   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.460968   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:40.463033   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:40.463859   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464289   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.464310   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464521   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.464735   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.464912   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.465060   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.465353   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:40.466408   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:40.466430   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:40.466455   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.468998   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469414   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0917 16:56:40.469615   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.469633   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469650   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.469800   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.469875   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.470033   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.470168   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.470428   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.470451   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.470890   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.471071   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.472593   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.474373   18924 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:40.475735   18924 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:40.477135   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:40.477147   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:40.477165   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.480812   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481316   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.481354   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481624   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.481827   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.481966   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.482082   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.887038   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:40.887063   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:40.957503   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.957833   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.990013   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.996441   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:40.996591   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 16:56:41.047793   18924 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:41.047816   18924 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:41.050251   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:41.050266   18924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:41.052602   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:41.052619   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:41.070072   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:41.070385   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:41.085507   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:41.098190   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:41.098217   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:56:41.112724   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:41.177089   18924 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:41.177113   18924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:41.200547   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:41.200577   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:41.201601   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:41.201619   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:41.263241   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:41.263268   18924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:41.284538   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:41.284563   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:41.462449   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.462479   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:56:41.516502   18924 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:41.516526   18924 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:41.527742   18924 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.527763   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:41.592582   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:41.592603   18924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:41.692484   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:41.692515   18924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:41.707737   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:41.707771   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:41.725728   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:41.725752   18924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:41.751606   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.763147   18924 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:41.763174   18924 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:41.845855   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.917959   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:41.917982   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:41.932379   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:41.932409   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:41.933743   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:41.933758   18924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:42.000189   18924 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.000209   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:42.019019   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:42.019039   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:42.120876   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:42.120903   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:42.215490   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:42.219259   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.235839   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:42.249709   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:42.249738   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:42.408626   18924 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:42.408660   18924 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:42.597811   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:42.597836   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:42.832549   18924 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:42.832574   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:42.877638   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:42.877673   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:43.070157   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:43.223931   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:43.223966   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:43.642910   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:43.642945   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:44.074864   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:44.074888   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:44.426715   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:44.426745   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:44.816971   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:47.444904   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:47.444944   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:47.448454   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.448848   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:47.448876   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.449068   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:47.449290   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:47.449479   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:47.449640   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:48.201028   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:48.440942   18924 addons.go:234] Setting addon gcp-auth=true in "addons-408385"
	I0917 16:56:48.440997   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:48.441325   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.441359   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.457638   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0917 16:56:48.458035   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.458476   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.458498   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.459269   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.459712   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.459740   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.475904   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0917 16:56:48.476401   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.476926   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.476955   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.477337   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.477515   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:48.479054   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:48.479263   18924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:48.479286   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:48.481756   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482133   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:48.482152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482342   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:48.482542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:48.482682   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:48.482802   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:50.488236   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.53069821s)
	I0917 16:56:50.488278   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.530418125s)
	I0917 16:56:50.488291   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488303   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488312   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488328   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488345   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.49830433s)
	I0917 16:56:50.488378   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488393   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488405   18924 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.491938735s)
	I0917 16:56:50.488459   18924 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.491834553s)
	I0917 16:56:50.488485   18924 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0917 16:56:50.488684   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.418586295s)
	I0917 16:56:50.488715   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488725   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488806   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.418403871s)
	I0917 16:56:50.488819   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488826   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488884   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.403354884s)
	I0917 16:56:50.488898   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488905   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488948   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.376200431s)
	I0917 16:56:50.488960   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488968   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489028   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.737398015s)
	I0917 16:56:50.489042   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489053   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489103   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.643223171s)
	I0917 16:56:50.489114   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489123   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489186   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.273670484s)
	I0917 16:56:50.489198   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489205   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489334   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.27004628s)
	W0917 16:56:50.489366   18924 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:56:50.489413   18924 retry.go:31] will retry after 216.517027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
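	(Editor's note: the "ensure CRDs are installed first" failure above is the usual race when CRDs and custom resources that depend on them are applied in a single kubectl batch; the VolumeSnapshotClass is rejected because the freshly created snapshot.storage.k8s.io CRDs are not yet served. The log resolves it by retrying, and the later attempt at 16:56:50.706 re-runs the apply with --force. As an illustration only, and not minikube's actual code, a minimal client-go sketch of waiting for a CRD to report Established before applying dependent objects could look like this:)

	    package addonutil

	    import (
	        "context"
	        "time"

	        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	    )

	    // waitForCRDEstablished polls until the named CRD (for example
	    // "volumesnapshotclasses.snapshot.storage.k8s.io") reports the
	    // Established condition, i.e. the API server is ready to serve its
	    // custom resources. Only then is it safe to apply objects of that kind.
	    // Hypothetical helper for illustration; not part of the minikube code base.
	    func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	        return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // CRD not visible yet; keep polling
	                }
	                for _, cond := range crd.Status.Conditions {
	                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
	                        return true, nil
	                    }
	                }
	                return false, nil
	            })
	    }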
	I0917 16:56:50.489488   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.253623687s)
	I0917 16:56:50.489516   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489528   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489623   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.419429218s)
	I0917 16:56:50.489637   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489645   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490730   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490746   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490761   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490769   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490773   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490776   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490780   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490789   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490794   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490867   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490875   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490876   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490881   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490883   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490889   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490891   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490898   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490930   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490950   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490956   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490963   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490969   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491011   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491030   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491037   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491044   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491051   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491088   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491105   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491111   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491119   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491128   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491168   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491188   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491195   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491202   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491208   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491246   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491266   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491272   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491281   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491288   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491328   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491348   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491353   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491360   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491366   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491403   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491422   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491428   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491435   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491441   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491476   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491493   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491498   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491505   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491511   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.492188   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.492223   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.492231   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494315   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494322   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494514   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494536   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494542   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494578   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494609   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494616   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494691   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494714   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494720   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494832   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494873   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494880   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495687   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495706   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495732   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.495738   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495746   18924 addons.go:475] Verifying addon registry=true in "addons-408385"
	I0917 16:56:50.496538   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496544   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496555   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496564   18924 addons.go:475] Verifying addon metrics-server=true in "addons-408385"
	I0917 16:56:50.496566   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496573   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496624   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496639   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496683   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496717   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496720   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496727   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496735   18924 addons.go:475] Verifying addon ingress=true in "addons-408385"
	I0917 16:56:50.496808   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496815   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.497192   18924 node_ready.go:35] waiting up to 6m0s for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.497351   18924 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-408385 service yakd-dashboard -n yakd-dashboard
	
	I0917 16:56:50.497389   18924 out.go:177] * Verifying registry addon...
	I0917 16:56:50.498257   18924 out.go:177] * Verifying ingress addon...
	I0917 16:56:50.500180   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:56:50.500419   18924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 16:56:50.518284   18924 node_ready.go:49] node "addons-408385" has status "Ready":"True"
	I0917 16:56:50.518306   18924 node_ready.go:38] duration metric: took 21.091831ms for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.518315   18924 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:50.520856   18924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:56:50.520883   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:50.523079   18924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 16:56:50.523105   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:50.546145   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581347   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.581372   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.581745   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.581768   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.581818   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.581841   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.581859   18924 pod_ready.go:82] duration metric: took 35.685801ms for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581871   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	W0917 16:56:50.581910   18924 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0917 16:56:50.586512   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.586530   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.586847   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.586867   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.596137   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.596162   18924 pod_ready.go:82] duration metric: took 14.284009ms for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.596172   18924 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623810   18924 pod_ready.go:93] pod "etcd-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.623835   18924 pod_ready.go:82] duration metric: took 27.656536ms for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623845   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.706847   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:50.717893   18924 pod_ready.go:93] pod "kube-apiserver-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.717915   18924 pod_ready.go:82] duration metric: took 94.063278ms for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.717925   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902706   18924 pod_ready.go:93] pod "kube-controller-manager-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.902732   18924 pod_ready.go:82] duration metric: took 184.800591ms for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902744   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.993709   18924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-408385" context rescaled to 1 replicas
	I0917 16:56:51.006258   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.006412   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.311675   18924 pod_ready.go:93] pod "kube-proxy-6blpt" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.311702   18924 pod_ready.go:82] duration metric: took 408.951515ms for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.311711   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.511546   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.512343   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.712678   18924 pod_ready.go:93] pod "kube-scheduler-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.712702   18924 pod_ready.go:82] duration metric: took 400.983783ms for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.712710   18924 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:52.025749   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.026250   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.190681   18924 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.711392152s)
	I0917 16:56:52.191047   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.373996255s)
	I0917 16:56:52.191104   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191125   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191470   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:52.191517   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191536   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191553   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191566   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191792   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191805   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191826   18924 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:52.192415   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:52.193515   18924 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:56:52.195286   18924 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:56:52.196006   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:56:52.196821   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:56:52.196837   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:56:52.214434   18924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:56:52.214458   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:52.371675   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:56:52.371704   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:56:52.497363   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.497383   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:56:52.504719   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.564224   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.700595   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.011701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.012015   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.159940   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.453036172s)
	I0917 16:56:53.160005   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160022   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:53.160332   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160341   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.160357   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160374   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160616   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160633   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.201585   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.506249   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.506293   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.709697   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.738231   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:53.984139   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.419872413s)
	I0917 16:56:53.984190   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984212   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984568   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984589   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.984604   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984612   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984834   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984853   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.987027   18924 addons.go:475] Verifying addon gcp-auth=true in "addons-408385"
	I0917 16:56:53.988873   18924 out.go:177] * Verifying gcp-auth addon...
	I0917 16:56:53.990825   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:56:54.055092   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.055115   18924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:56:54.055131   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.055387   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.202926   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.494716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.506148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.506174   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.701636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.994373   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.005045   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.005494   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.200848   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.495663   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.504909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.506086   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.855465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:55.856656   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.993948   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.005711   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.006104   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.201254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.494421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.505090   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.505414   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.701176   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.004844   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.005282   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.200660   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.494627   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.504621   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.505103   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.700909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.994928   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.004757   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.005263   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.201434   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.219886   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:58.495575   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.504836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.505317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.701773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.994959   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.005951   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.006611   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.201975   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.495332   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.506814   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.507819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.700658   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.995245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.004708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.006302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.200967   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.219938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:00.495921   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.506377   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.506950   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.703768   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.995363   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.010398   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.011329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.202047   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.495085   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.504652   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.505645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.702029   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.994945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.006766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.008040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.200473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.221720   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:02.495451   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.504315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.506062   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.700326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.995096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.005924   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.006819   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.201912   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.495000   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.504765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.505943   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.701922   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.995337   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.004819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.005035   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.201761   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.494642   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.504915   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.505321   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.702013   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.719604   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:04.995214   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.004602   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.005121   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.200850   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.494936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.505716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.506224   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.700440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.994611   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.004208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.006099   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.200977   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.528028   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.528127   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.528173   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.701154   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.994040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.004294   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.004738   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.200229   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.219592   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:07.495326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.504606   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.505193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.700901   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.995249   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.004764   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.004900   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.200699   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.495328   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.503987   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.506826   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.700862   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.994609   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.004062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.004349   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.202126   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.220482   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:09.494945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.505116   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.506159   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.701734   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.996629   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.019821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.021645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.201473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.495799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.504801   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.506075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.704466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.994193   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.005581   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.005762   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.201601   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.495169   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.504802   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.505211   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.700302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.719276   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:11.994525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.004692   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.005129   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.201376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.494979   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.505561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.505703   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.975902   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.995801   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.004147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.006830   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.200882   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.496008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.506567   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.507195   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.701055   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.719675   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:13.994939   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.004466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.004915   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.202094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.495836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.507503   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.508148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.700728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.996044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.006105   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.006707   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.201653   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.494526   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.504505   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.505363   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.703586   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.994788   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.005108   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.005808   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.206044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.220095   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:16.494315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.505315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.704765   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.995307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.096405   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.096552   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.200374   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.495743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.505031   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.506329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.721075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.995723   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.004552   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.005928   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.200087   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.495274   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.504597   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.507379   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.700946   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.719392   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:18.994993   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.004577   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.005098   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.589168   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.589327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.589667   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.589832   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.700535   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.994305   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.004913   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.005728   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.200701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.494743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.504820   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.506113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.702270   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.995072   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.004890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.005076   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.201054   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.219658   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:21.495297   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.505528   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.506012   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.702119   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.996390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.005561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.005652   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.200739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.494563   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.506327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.506676   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.700496   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.032136   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.032957   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.033036   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.202150   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.494360   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.504706   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.505348   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.947525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.948575   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:23.994678   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.004245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.005329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.201222   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.495096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.508318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.510378   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.995276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.004269   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.007124   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.201504   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.495365   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.505283   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.505799   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.700648   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.039815   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.040228   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.040316   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.210495   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.220088   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:26.495232   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.510833   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.511093   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.700936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.996436   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.004910   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.005741   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.202425   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.495288   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.505457   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.508530   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.700773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.995376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.005437   18924 kapi.go:107] duration metric: took 37.50525233s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 16:57:28.005661   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.201963   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.495032   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.505610   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.701512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.728312   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:28.995608   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.005993   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.202300   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.497995   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.504870   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.700212   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.995246   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.004884   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.202534   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.495333   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.505996   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.702019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.994099   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.005314   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.202708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.229988   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:31.493840   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.504120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.701449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.994920   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.004766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.357159   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.495449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.505535   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.701208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.995100   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.004376   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.200664   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.498557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.507115   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.700821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.718587   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:33.995468   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.005462   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.201071   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.495519   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.505080   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.701276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.995558   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.004981   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.203003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.494303   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.504708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.700739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.718782   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:35.994881   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.097365   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.201890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.495139   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.505487   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.701057   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.996834   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.005523   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.410454   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.516803   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.517120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.701410   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.729938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:37.996501   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.005193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.200777   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.494507   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.504434   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.701189   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.994900   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.004122   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.201715   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.496473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.506073   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.703094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.994841   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.004452   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.201004   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:40.218439   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:40.495729   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.504143   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.853096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.158440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.159441   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.203681   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.494298   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.701128   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.993947   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.005059   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.201190   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.219465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:42.495543   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.505413   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.995239   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.004317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.201671   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.495708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.505113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.702002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.997002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.004765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.200983   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.507042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.510903   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.702550   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.723909   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:44.996307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.004982   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.201479   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.495981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.505405   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.700916   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.998807   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.011459   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.201895   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.495657   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.701933   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.999183   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.006964   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.203049   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.219100   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:47.498008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.506371   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.707797   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.996867   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.004924   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.201042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.495636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.511151   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.701120   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.996590   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.012436   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.202003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.494728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.505025   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.785313   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.788215   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:49.994837   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.004304   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.201021   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.495181   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.505534   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.701006   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.994275   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.005002   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.203019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.495078   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.596463   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.701421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.994676   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.005680   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.200539   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:52.218738   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:52.497799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.504566   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.700922   18924 kapi.go:107] duration metric: took 1m0.504912498s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:52.995147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.004995   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.494512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.505190   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.994795   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.004440   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.219175   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:54.495330   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.505134   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.995438   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.004940   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.495125   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.504590   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.995062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.004478   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.225636   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:56.499868   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.505194   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.996724   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.005674   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.495684   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.505781   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.995631   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.008631   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.494323   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.504300   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.718959   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:58.993981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.004453   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.494338   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.505823   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.995264   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.005723   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.500318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.507383   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.937272   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:00.995905   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.004584   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.494776   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.504156   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.995441   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.005703   18924 kapi.go:107] duration metric: took 1m11.505283995s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 16:58:02.495427   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.994984   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.220146   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:03.494557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.995254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.495578   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.592860   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.720255   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:05.995373   18924 kapi.go:107] duration metric: took 1m12.00454435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:58:05.997195   18924 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-408385 cluster.
	I0917 16:58:05.998523   18924 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:58:05.999866   18924 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:58:06.001358   18924 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 16:58:06.002828   18924 addons.go:510] duration metric: took 1m25.721995771s for enable addons: enabled=[helm-tiller nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server ingress-dns cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 16:58:08.220603   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:10.720898   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:13.220582   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:15.719610   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:18.218981   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:20.219159   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:22.219968   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:24.220176   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:26.719061   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:28.719706   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:31.220522   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:33.222077   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:35.720029   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:37.220204   18924 pod_ready.go:93] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.220228   18924 pod_ready.go:82] duration metric: took 1m45.507511223s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.220238   18924 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225164   18924 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.225186   18924 pod_ready.go:82] duration metric: took 4.941018ms for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225205   18924 pod_ready.go:39] duration metric: took 1m46.70687885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:58:37.225220   18924 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:58:37.225261   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:37.225308   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:37.279317   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.279344   18924 cri.go:89] found id: ""
	I0917 16:58:37.279354   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:37.279413   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.283927   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:37.283993   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:37.333054   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.333075   18924 cri.go:89] found id: ""
	I0917 16:58:37.333082   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:37.333127   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.337854   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:37.337913   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:37.376799   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:37.376819   18924 cri.go:89] found id: ""
	I0917 16:58:37.376826   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:37.376871   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.381347   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:37.381426   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:37.427851   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:37.427871   18924 cri.go:89] found id: ""
	I0917 16:58:37.427878   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:37.427920   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.432240   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:37.432302   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:37.479690   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.479709   18924 cri.go:89] found id: ""
	I0917 16:58:37.479720   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:37.479769   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.484307   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:37.484359   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:37.530462   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:37.530482   18924 cri.go:89] found id: ""
	I0917 16:58:37.530490   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:37.530536   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.534804   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:37.534867   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:37.576826   18924 cri.go:89] found id: ""
	I0917 16:58:37.576855   18924 logs.go:276] 0 containers: []
	W0917 16:58:37.576867   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:37.576879   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:37.576897   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.628751   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:37.628793   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.693416   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:37.693451   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.734110   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:37.734140   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:38.409207   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:38.409261   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:38.463953   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:38.463988   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:38.554114   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:38.554151   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:38.572938   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:38.572963   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:38.770050   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:38.770086   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:38.817495   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:38.817523   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:38.864149   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:38.864183   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.429718   18924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:58:41.453455   18924 api_server.go:72] duration metric: took 2m1.172653121s to wait for apiserver process to appear ...
	I0917 16:58:41.453496   18924 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:58:41.453536   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:41.453601   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:41.494855   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:41.494880   18924 cri.go:89] found id: ""
	I0917 16:58:41.494890   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:41.494938   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.499492   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:41.499556   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:41.538940   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.538965   18924 cri.go:89] found id: ""
	I0917 16:58:41.538974   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:41.539031   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.543179   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:41.543238   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:41.592083   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.592107   18924 cri.go:89] found id: ""
	I0917 16:58:41.592115   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:41.592162   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.596864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:41.596926   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:41.642101   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.642126   18924 cri.go:89] found id: ""
	I0917 16:58:41.642136   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:41.642182   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.647074   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:41.647150   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:41.689215   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:41.689253   18924 cri.go:89] found id: ""
	I0917 16:58:41.689262   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:41.689322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.693834   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:41.693902   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:41.736215   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.736241   18924 cri.go:89] found id: ""
	I0917 16:58:41.736251   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:41.736309   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.740897   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:41.740965   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:41.782588   18924 cri.go:89] found id: ""
	I0917 16:58:41.782611   18924 logs.go:276] 0 containers: []
	W0917 16:58:41.782619   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:41.782626   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:41.782637   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.843944   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:41.843982   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.886360   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:41.886389   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.932278   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:41.932318   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:42.000845   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:42.000894   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:42.752922   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:42.752965   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:42.806585   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:42.806621   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:42.822915   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:42.822950   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:42.872331   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:42.872363   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:42.911531   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:42.911556   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:43.001970   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:43.002011   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:45.635837   18924 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0917 16:58:45.642306   18924 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0917 16:58:45.643242   18924 api_server.go:141] control plane version: v1.31.1
	I0917 16:58:45.643264   18924 api_server.go:131] duration metric: took 4.189760157s to wait for apiserver health ...
	I0917 16:58:45.643271   18924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:58:45.643288   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:45.643328   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:45.693219   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:45.693256   18924 cri.go:89] found id: ""
	I0917 16:58:45.693265   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:45.693322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.698334   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:45.698400   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:45.762484   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:45.762509   18924 cri.go:89] found id: ""
	I0917 16:58:45.762517   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:45.762574   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.767293   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:45.767362   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:45.815706   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:45.815734   18924 cri.go:89] found id: ""
	I0917 16:58:45.815743   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:45.815801   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.821316   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:45.821379   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:45.872354   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:45.872375   18924 cri.go:89] found id: ""
	I0917 16:58:45.872384   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:45.872457   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.876864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:45.876916   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:45.933435   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:45.933456   18924 cri.go:89] found id: ""
	I0917 16:58:45.933464   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:45.933522   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.937839   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:45.937893   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:45.990922   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:45.990950   18924 cri.go:89] found id: ""
	I0917 16:58:45.990960   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:45.991013   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.995807   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:45.995870   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:46.057313   18924 cri.go:89] found id: ""
	I0917 16:58:46.057345   18924 logs.go:276] 0 containers: []
	W0917 16:58:46.057362   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:46.057372   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:46.057385   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:46.149501   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:46.149539   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:46.282319   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:46.282352   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:46.337878   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:46.337916   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:46.391452   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:46.391485   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:46.429573   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:46.429607   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:47.320590   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:47.320629   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:47.339176   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:47.339207   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:47.401618   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:47.401661   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:47.448277   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:47.448312   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:47.519002   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:47.519038   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:50.090393   18924 system_pods.go:59] 18 kube-system pods found
	I0917 16:58:50.090427   18924 system_pods.go:61] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.090432   18924 system_pods.go:61] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.090436   18924 system_pods.go:61] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.090440   18924 system_pods.go:61] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.090443   18924 system_pods.go:61] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.090446   18924 system_pods.go:61] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.090449   18924 system_pods.go:61] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.090453   18924 system_pods.go:61] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.090456   18924 system_pods.go:61] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.090459   18924 system_pods.go:61] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.090462   18924 system_pods.go:61] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.090465   18924 system_pods.go:61] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.090468   18924 system_pods.go:61] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.090472   18924 system_pods.go:61] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.090477   18924 system_pods.go:61] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.090480   18924 system_pods.go:61] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.090483   18924 system_pods.go:61] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.090486   18924 system_pods.go:61] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.090492   18924 system_pods.go:74] duration metric: took 4.447215491s to wait for pod list to return data ...
	I0917 16:58:50.090505   18924 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:58:50.093151   18924 default_sa.go:45] found service account: "default"
	I0917 16:58:50.093172   18924 default_sa.go:55] duration metric: took 2.662022ms for default service account to be created ...
	I0917 16:58:50.093180   18924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:58:50.100564   18924 system_pods.go:86] 18 kube-system pods found
	I0917 16:58:50.100596   18924 system_pods.go:89] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.100607   18924 system_pods.go:89] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.100619   18924 system_pods.go:89] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.100623   18924 system_pods.go:89] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.100628   18924 system_pods.go:89] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.100632   18924 system_pods.go:89] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.100637   18924 system_pods.go:89] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.100640   18924 system_pods.go:89] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.100643   18924 system_pods.go:89] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.100647   18924 system_pods.go:89] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.100650   18924 system_pods.go:89] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.100657   18924 system_pods.go:89] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.100664   18924 system_pods.go:89] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.100667   18924 system_pods.go:89] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.100670   18924 system_pods.go:89] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.100674   18924 system_pods.go:89] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.100677   18924 system_pods.go:89] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.100680   18924 system_pods.go:89] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.100687   18924 system_pods.go:126] duration metric: took 7.502942ms to wait for k8s-apps to be running ...
	I0917 16:58:50.100695   18924 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:58:50.100746   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:58:50.115763   18924 system_svc.go:56] duration metric: took 15.057221ms WaitForService to wait for kubelet
	I0917 16:58:50.115798   18924 kubeadm.go:582] duration metric: took 2m9.83500224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:58:50.115816   18924 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:58:50.119437   18924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 16:58:50.119462   18924 node_conditions.go:123] node cpu capacity is 2
	I0917 16:58:50.119474   18924 node_conditions.go:105] duration metric: took 3.65352ms to run NodePressure ...
	I0917 16:58:50.119484   18924 start.go:241] waiting for startup goroutines ...
	I0917 16:58:50.119490   18924 start.go:246] waiting for cluster config update ...
	I0917 16:58:50.119505   18924 start.go:255] writing updated cluster config ...
	I0917 16:58:50.119789   18924 ssh_runner.go:195] Run: rm -f paused
	I0917 16:58:50.169934   18924 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:58:50.173108   18924 out.go:177] * Done! kubectl is now configured to use "addons-408385" cluster and "default" namespace by default
	
	
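The start log above is minikube's addon readiness loop: each kapi.go:96 entry records one poll of a label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth), and the matching kapi.go:107 entry records how long that wait took. A rough manual equivalent for one of these selectors, as a sketch only, would look like the command below; the addons-408385 context name comes from this run, the ingress-nginx namespace is taken from the CRI-O output further down, and the timeout is an arbitrary choice here, not minikube's own value.

  kubectl --context addons-408385 -n ingress-nginx wait pod \
    -l app.kubernetes.io/name=ingress-nginx \
    --for=condition=Ready --timeout=6m

The section that follows is the container runtime journal for this node, gathered the same way as the "Gathering logs for CRI-O ... journalctl -u crio" steps in the log above.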
	==> CRI-O <==
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.168870327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592886168840505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8442064e-e9af-41f5-b398-bba6e5ebaf3b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.169459192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a7eead0-70bd-44f5-b14e-e850e2caff22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.169520271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a7eead0-70bd-44f5-b14e-e850e2caff22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.169888859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592,PodSandboxId:756a63bbbea0124b90aca00ac7b34c1a3af57e1bf3d5375aad710a8503d6d5c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726592281655849272,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-wqgsl,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014331b175902c12522e8aea30abc488fdc5e3030b68c3331b8fadd59ee4616c,PodSandboxId:9e6c852eaf6dda1956e4a448deda9b06a6e53a374942a1908b0308edd8701668,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b
69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726592254168967413,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-sjbnb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63f7f867-db6c-4e11-b32b-b52255c5a318,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Imag
e:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cc28dd223fae5ab7d3dfd2b1e58d3dcaa8b99dd03c48c005906680794deadd,P
odSandboxId:1618cee78b70d5876d3bbeaf46220d7c1c126ade06474a908e2f5f43dbb7de53,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_EXITED,CreatedAt:1726592233284890841,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-bts6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c244a3-5a3d-428b-9b81-02ea087e5124,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1f87693e9c96233254fe7aec19cf95543b823ca22d9e30bdc64fbac01e1ae0,PodSandboxId:8135dd0e54a129fa9dbaf0b35434cdaeff325b3e93d2581fb18fc05c11523fd4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726592228350903596,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-95n5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48c0bfc6-64c7-473b-9f8c-429d8af8f349,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505,PodSandboxId:ba55d5dc247d0985b77a60f4327d66f79cd8b40c4857821a02aade515c531383,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726592218819928992,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a365fa42-68bf-4f57-ad20-e437ef76117e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed
5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f
,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a7eead0-70bd-44f5-b14e-e850e2caff22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.207070690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8846a72-2001-4639-bb3a-10258de1dff3 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.207147298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8846a72-2001-4639-bb3a-10258de1dff3 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.208694667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f35be3c-7b50-4954-986b-d108539d3d47 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.209937355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592886209907651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f35be3c-7b50-4954-986b-d108539d3d47 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.210871963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5ae63af-36b8-4fae-8e55-1d1d155042ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.210933225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5ae63af-36b8-4fae-8e55-1d1d155042ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.211310498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592,PodSandboxId:756a63bbbea0124b90aca00ac7b34c1a3af57e1bf3d5375aad710a8503d6d5c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726592281655849272,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-wqgsl,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014331b175902c12522e8aea30abc488fdc5e3030b68c3331b8fadd59ee4616c,PodSandboxId:9e6c852eaf6dda1956e4a448deda9b06a6e53a374942a1908b0308edd8701668,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b
69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726592254168967413,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-sjbnb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63f7f867-db6c-4e11-b32b-b52255c5a318,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Imag
e:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cc28dd223fae5ab7d3dfd2b1e58d3dcaa8b99dd03c48c005906680794deadd,P
odSandboxId:1618cee78b70d5876d3bbeaf46220d7c1c126ade06474a908e2f5f43dbb7de53,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_EXITED,CreatedAt:1726592233284890841,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-bts6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c244a3-5a3d-428b-9b81-02ea087e5124,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1f87693e9c96233254fe7aec19cf95543b823ca22d9e30bdc64fbac01e1ae0,PodSandboxId:8135dd0e54a129fa9dbaf0b35434cdaeff325b3e93d2581fb18fc05c11523fd4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726592228350903596,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-95n5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48c0bfc6-64c7-473b-9f8c-429d8af8f349,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505,PodSandboxId:ba55d5dc247d0985b77a60f4327d66f79cd8b40c4857821a02aade515c531383,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726592218819928992,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a365fa42-68bf-4f57-ad20-e437ef76117e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed
5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f
,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5ae63af-36b8-4fae-8e55-1d1d155042ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.259258806Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70668fa9-ce8b-423f-92dc-ff70660d72a4 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.259407688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70668fa9-ce8b-423f-92dc-ff70660d72a4 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.261111030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5521b163-294b-4b06-a4d8-90e413a019b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.262693521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592886262663913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5521b163-294b-4b06-a4d8-90e413a019b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.263272277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb92673c-547f-436f-b1dd-51172c8eaaa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.263339639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb92673c-547f-436f-b1dd-51172c8eaaa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.263859046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592,PodSandboxId:756a63bbbea0124b90aca00ac7b34c1a3af57e1bf3d5375aad710a8503d6d5c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726592281655849272,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-wqgsl,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014331b175902c12522e8aea30abc488fdc5e3030b68c3331b8fadd59ee4616c,PodSandboxId:9e6c852eaf6dda1956e4a448deda9b06a6e53a374942a1908b0308edd8701668,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b
69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726592254168967413,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-sjbnb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63f7f867-db6c-4e11-b32b-b52255c5a318,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Imag
e:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cc28dd223fae5ab7d3dfd2b1e58d3dcaa8b99dd03c48c005906680794deadd,P
odSandboxId:1618cee78b70d5876d3bbeaf46220d7c1c126ade06474a908e2f5f43dbb7de53,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_EXITED,CreatedAt:1726592233284890841,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-bts6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c244a3-5a3d-428b-9b81-02ea087e5124,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1f87693e9c96233254fe7aec19cf95543b823ca22d9e30bdc64fbac01e1ae0,PodSandboxId:8135dd0e54a129fa9dbaf0b35434cdaeff325b3e93d2581fb18fc05c11523fd4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726592228350903596,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-95n5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48c0bfc6-64c7-473b-9f8c-429d8af8f349,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505,PodSandboxId:ba55d5dc247d0985b77a60f4327d66f79cd8b40c4857821a02aade515c531383,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726592218819928992,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a365fa42-68bf-4f57-ad20-e437ef76117e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed
5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f
,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb92673c-547f-436f-b1dd-51172c8eaaa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.301589539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9f19a22-d0b5-49a2-a69e-74b236384327 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.301665898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9f19a22-d0b5-49a2-a69e-74b236384327 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.303483420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73f3a729-d6ba-4461-a0e6-437f6bd5e6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.304972050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592886304872129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73f3a729-d6ba-4461-a0e6-437f6bd5e6e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.306108379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d673ad13-8ee2-42f6-81f4-c367783f9457 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.306171675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d673ad13-8ee2-42f6-81f4-c367783f9457 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:08:06 addons-408385 crio[662]: time="2024-09-17 17:08:06.306617139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592,PodSandboxId:756a63bbbea0124b90aca00ac7b34c1a3af57e1bf3d5375aad710a8503d6d5c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726592281655849272,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-wqgsl,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014331b175902c12522e8aea30abc488fdc5e3030b68c3331b8fadd59ee4616c,PodSandboxId:9e6c852eaf6dda1956e4a448deda9b06a6e53a374942a1908b0308edd8701668,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b
69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726592254168967413,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-sjbnb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 63f7f867-db6c-4e11-b32b-b52255c5a318,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Imag
e:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cc28dd223fae5ab7d3dfd2b1e58d3dcaa8b99dd03c48c005906680794deadd,P
odSandboxId:1618cee78b70d5876d3bbeaf46220d7c1c126ade06474a908e2f5f43dbb7de53,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_EXITED,CreatedAt:1726592233284890841,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-bts6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82c244a3-5a3d-428b-9b81-02ea087e5124,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed1f87693e9c96233254fe7aec19cf95543b823ca22d9e30bdc64fbac01e1ae0,PodSandboxId:8135dd0e54a129fa9dbaf0b35434cdaeff325b3e93d2581fb18fc05c11523fd4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726592228350903596,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-95n5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48c0bfc6-64c7-473b-9f8c-429d8af8f349,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505,PodSandboxId:ba55d5dc247d0985b77a60f4327d66f79cd8b40c4857821a02aade515c531383,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726592218819928992,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a365fa42-68bf-4f57-ad20-e437ef76117e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed
5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f
,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.n
ame: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d673ad13-8ee2-42f6-81f4-c367783f9457 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	397f391630495       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              7 seconds ago       Running             nginx                      0                   6e889439508c6       nginx
	f4c5e175eedc0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 10 minutes ago      Running             gcp-auth                   0                   e39bbf11dc165       gcp-auth-89d5ffd79-b7hz4
	60c71bad9fa6a       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago      Running             controller                 0                   756a63bbbea01       ingress-nginx-controller-bc57996ff-wqgsl
	fbf8a94347cdf       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             10 minutes ago      Exited              patch                      1                   d7e2e08709937       ingress-nginx-admission-patch-4b8gx
	6ea8c16cc48c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                     0                   31ba27ee18db2       ingress-nginx-admission-create-78945
	014331b175902       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              10 minutes ago      Running             yakd                       0                   9e6c852eaf6dd       yakd-dashboard-67d98fc6b-sjbnb
	c35ba12caa08b       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server             0                   d5f73be9090dd       metrics-server-84c5f94fbc-nxwr4
	c4cc28dd223fa       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Exited              cloud-spanner-emulator     0                   1618cee78b70d       cloud-spanner-emulator-769b77f747-bts6k
	ed1f87693e9c9       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   8135dd0e54a12       nvidia-device-plugin-daemonset-95n5v
	18180b1d2a45e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago      Running             minikube-ingress-dns       0                   ba55d5dc247d0       kube-ingress-dns-minikube
	4b3332c3d6766       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner        0                   1266dc642a5d5       storage-provisioner
	bc6baaebe3ad7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             11 minutes ago      Running             coredns                    0                   147e55def5b9c       coredns-7c65d6cfc9-6scmn
	78abe757b26b6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             11 minutes ago      Running             kube-proxy                 0                   ba0e8772c0eef       kube-proxy-6blpt
	535459bc7374f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                       0                   0d77877590458       etcd-addons-408385
	5e8239454541e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler             0                   900247dfddd23       kube-scheduler-addons-408385
	eb8765767a52a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager    0                   b694988dbe897       kube-controller-manager-addons-408385
	bd97816994086       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver             0                   b5606602c326c       kube-apiserver-addons-408385
	
	
	==> coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] <==
	[INFO] 127.0.0.1:57157 - 41320 "HINFO IN 6395580120945152869.1644042831807943476. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013215393s
	[INFO] 10.244.0.7:46761 - 53795 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000416235s
	[INFO] 10.244.0.7:46761 - 26406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000534252s
	[INFO] 10.244.0.7:48828 - 43868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012662s
	[INFO] 10.244.0.7:48828 - 6464 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151166s
	[INFO] 10.244.0.7:37590 - 72 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092427s
	[INFO] 10.244.0.7:37590 - 33095 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167498s
	[INFO] 10.244.0.7:58968 - 53960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109529s
	[INFO] 10.244.0.7:58968 - 34006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103021s
	[INFO] 10.244.0.7:37473 - 44286 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113543s
	[INFO] 10.244.0.7:37473 - 56545 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093243s
	[INFO] 10.244.0.7:41216 - 28183 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000250593s
	[INFO] 10.244.0.7:41216 - 45082 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008693s
	[INFO] 10.244.0.7:54147 - 34285 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052976s
	[INFO] 10.244.0.7:54147 - 34283 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055269s
	[INFO] 10.244.0.7:52498 - 26622 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077619s
	[INFO] 10.244.0.7:52498 - 59135 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083094s
	[INFO] 10.244.0.22:33436 - 15658 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000483817s
	[INFO] 10.244.0.22:54534 - 52664 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000596513s
	[INFO] 10.244.0.22:60274 - 25830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160929s
	[INFO] 10.244.0.22:55742 - 23361 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135249s
	[INFO] 10.244.0.22:58422 - 120 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117693s
	[INFO] 10.244.0.22:60253 - 8920 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000245419s
	[INFO] 10.244.0.22:47422 - 15749 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000974274s
	[INFO] 10.244.0.22:57287 - 962 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001082864s
	
	
	==> describe nodes <==
	Name:               addons-408385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-408385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-408385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-408385
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-408385
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:07:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:07:07 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:07:07 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:07:07 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:07:07 +0000   Tue, 17 Sep 2024 16:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    addons-408385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 303ab64fe93940c69a272a146d3d7928
	  System UUID:                303ab64f-e939-40c6-9a27-2a146d3d7928
	  Boot ID:                    fb6d0db4-ddc4-405a-8acb-6d4fe2f98715
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  gcp-auth                    gcp-auth-89d5ffd79-b7hz4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-wqgsl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-6scmn                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-408385                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-408385                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-408385       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6blpt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-408385                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-nxwr4             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-95n5v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-sjbnb              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-408385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-408385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-408385 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-408385 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-408385 event: Registered Node addons-408385 in Controller
	
	
	==> dmesg <==
	[  +5.511672] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.014447] kauditd_printk_skb: 82 callbacks suppressed
	[Sep17 16:57] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.722626] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.840032] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.274919] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.009609] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.152447] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.508316] kauditd_printk_skb: 31 callbacks suppressed
	[  +9.343675] kauditd_printk_skb: 13 callbacks suppressed
	[Sep17 16:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.434512] kauditd_printk_skb: 35 callbacks suppressed
	[ +36.392548] kauditd_printk_skb: 30 callbacks suppressed
	[Sep17 16:59] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:00] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:03] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:06] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:07] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.051902] kauditd_printk_skb: 49 callbacks suppressed
	[ +21.859037] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.899065] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.636828] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.569598] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.336851] kauditd_printk_skb: 27 callbacks suppressed
	[Sep17 17:08] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] <==
	{"level":"info","ts":"2024-09-17T16:57:40.825604Z","caller":"traceutil/trace.go:171","msg":"trace[707535934] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1019; }","duration":"149.5051ms","start":"2024-09-17T16:57:40.676093Z","end":"2024-09-17T16:57:40.825598Z","steps":["trace[707535934] 'agreement among raft nodes before linearized reading'  (duration: 149.444713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:40.825821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.125828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-09-17T16:57:40.825856Z","caller":"traceutil/trace.go:171","msg":"trace[1110547650] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4; range_end:; response_count:1; response_revision:1019; }","duration":"133.163284ms","start":"2024-09-17T16:57:40.692688Z","end":"2024-09-17T16:57:40.825851Z","steps":["trace[1110547650] 'agreement among raft nodes before linearized reading'  (duration: 133.076718ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:40.825938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.331801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:40.825956Z","caller":"traceutil/trace.go:171","msg":"trace[1450865517] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1019; }","duration":"142.345433ms","start":"2024-09-17T16:57:40.683601Z","end":"2024-09-17T16:57:40.825947Z","steps":["trace[1450865517] 'agreement among raft nodes before linearized reading'  (duration: 142.316331ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:41.126635Z","caller":"traceutil/trace.go:171","msg":"trace[957294072] linearizableReadLoop","detail":"{readStateIndex:1046; appliedIndex:1045; }","duration":"156.015075ms","start":"2024-09-17T16:57:40.970600Z","end":"2024-09-17T16:57:41.126615Z","steps":["trace[957294072] 'read index received'  (duration: 151.361865ms)","trace[957294072] 'applied index is now lower than readState.Index'  (duration: 4.652523ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:57:41.127181Z","caller":"traceutil/trace.go:171","msg":"trace[1537240876] transaction","detail":"{read_only:false; response_revision:1020; number_of_response:1; }","duration":"288.479372ms","start":"2024-09-17T16:57:40.838686Z","end":"2024-09-17T16:57:41.127165Z","steps":["trace[1537240876] 'process raft request'  (duration: 283.161611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.127246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.650442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.129441Z","caller":"traceutil/trace.go:171","msg":"trace[1844929121] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"158.756306ms","start":"2024-09-17T16:57:40.970576Z","end":"2024-09-17T16:57:41.129332Z","steps":["trace[1844929121] 'agreement among raft nodes before linearized reading'  (duration: 156.630189ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.128691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.896858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.130001Z","caller":"traceutil/trace.go:171","msg":"trace[2083427081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"150.161523ms","start":"2024-09-17T16:57:40.979772Z","end":"2024-09-17T16:57:41.129934Z","steps":["trace[2083427081] 'agreement among raft nodes before linearized reading'  (duration: 148.875086ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:49.758492Z","caller":"traceutil/trace.go:171","msg":"trace[1749067213] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"142.014283ms","start":"2024-09-17T16:57:49.616449Z","end":"2024-09-17T16:57:49.758464Z","steps":["trace[1749067213] 'read index received'  (duration: 137.745885ms)","trace[1749067213] 'applied index is now lower than readState.Index'  (duration: 4.264398ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:57:49.761832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.490472ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:49.761946Z","caller":"traceutil/trace.go:171","msg":"trace[402711062] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1100; }","duration":"145.520155ms","start":"2024-09-17T16:57:49.616413Z","end":"2024-09-17T16:57:49.761933Z","steps":["trace[402711062] 'agreement among raft nodes before linearized reading'  (duration: 142.267631ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:00.905621Z","caller":"traceutil/trace.go:171","msg":"trace[1306530275] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"289.751652ms","start":"2024-09-17T16:58:00.615836Z","end":"2024-09-17T16:58:00.905587Z","steps":["trace[1306530275] 'read index received'  (duration: 289.557299ms)","trace[1306530275] 'applied index is now lower than readState.Index'  (duration: 193.815µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:58:00.905791Z","caller":"traceutil/trace.go:171","msg":"trace[759752851] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"392.185176ms","start":"2024-09-17T16:58:00.513584Z","end":"2024-09-17T16:58:00.905769Z","steps":["trace[759752851] 'process raft request'  (duration: 391.871349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.905917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T16:58:00.513568Z","time spent":"392.247218ms","remote":"127.0.0.1:46318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-17T16:58:00.906045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.207792ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:58:00.906091Z","caller":"traceutil/trace.go:171","msg":"trace[250334734] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1127; }","duration":"290.252982ms","start":"2024-09-17T16:58:00.615830Z","end":"2024-09-17T16:58:00.906083Z","steps":["trace[250334734] 'agreement among raft nodes before linearized reading'  (duration: 290.193074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.906401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.852875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-09-17T16:58:00.906443Z","caller":"traceutil/trace.go:171","msg":"trace[1208448744] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4; range_end:; response_count:1; response_revision:1127; }","duration":"214.898501ms","start":"2024-09-17T16:58:00.691537Z","end":"2024-09-17T16:58:00.906435Z","steps":["trace[1208448744] 'agreement among raft nodes before linearized reading'  (duration: 214.748925ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:05.566022Z","caller":"traceutil/trace.go:171","msg":"trace[1110581115] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"216.906105ms","start":"2024-09-17T16:58:05.349099Z","end":"2024-09-17T16:58:05.566005Z","steps":["trace[1110581115] 'process raft request'  (duration: 216.44414ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:31.055466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-17T17:06:31.099441Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"43.089418ms","hash":2805165075,"current-db-size-bytes":6627328,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T17:06:31.099569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2805165075,"revision":1534,"compact-revision":-1}
	
	
	==> gcp-auth [f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d] <==
	2024/09/17 16:58:05 GCP Auth Webhook started!
	2024/09/17 16:58:50 Ready to marshal response ...
	2024/09/17 16:58:50 Ready to write response ...
	2024/09/17 16:58:50 Ready to marshal response ...
	2024/09/17 16:58:50 Ready to write response ...
	2024/09/17 16:58:50 Ready to marshal response ...
	2024/09/17 16:58:50 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:55 Ready to marshal response ...
	2024/09/17 17:06:55 Ready to write response ...
	2024/09/17 17:07:04 Ready to marshal response ...
	2024/09/17 17:07:04 Ready to write response ...
	2024/09/17 17:07:06 Ready to marshal response ...
	2024/09/17 17:07:06 Ready to write response ...
	2024/09/17 17:07:29 Ready to marshal response ...
	2024/09/17 17:07:29 Ready to write response ...
	2024/09/17 17:07:50 Ready to marshal response ...
	2024/09/17 17:07:50 Ready to write response ...
	2024/09/17 17:07:56 Ready to marshal response ...
	2024/09/17 17:07:56 Ready to write response ...
	
	
	==> kernel <==
	 17:08:06 up 12 min,  0 users,  load average: 1.23, 0.79, 0.57
	Linux addons-408385 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] <==
	E0917 17:07:29.303576       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:30.315149       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:31.324258       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:32.332918       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:33.341648       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:34.356998       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:35.368574       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:36.377836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 17:07:45.022032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.022241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.069544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.070007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.095472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.095596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.140075       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.140334       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.293286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.293432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:07:46.095884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:07:46.293643       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0917 17:07:46.311410       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0917 17:07:56.668400       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:07:56.842947       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.139.217"}
	I0917 17:08:01.899778       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:08:03.039111       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] <==
	W0917 17:07:49.635657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:49.635791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:50.356455       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:50.356548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:53.536187       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:53.536418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:07:54.731101       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0917 17:07:54.896028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:54.896067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:55.089867       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:55.090036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:07:55.213605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="4.854µs"
	I0917 17:07:56.283059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="7.179µs"
	W0917 17:08:02.995924       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:02.995989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0917 17:08:03.041047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:04.388603       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:04.388778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:04.587068       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:04.587123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:08:05.131591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.529µs"
	W0917 17:08:06.281035       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:06.281103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:08:06.379283       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:08:06.379413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 16:56:41.756177       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 16:56:41.772932       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E0917 16:56:41.773152       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:41.851988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 16:56:41.852089       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 16:56:41.852113       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:41.860672       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:41.861044       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:41.861068       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:41.862987       1 config.go:199] "Starting service config controller"
	I0917 16:56:41.863008       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:41.863038       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:41.863044       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:41.863485       1 config.go:328] "Starting node config controller"
	I0917 16:56:41.863493       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:41.963269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:41.963331       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:41.963540       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] <==
	W0917 16:56:32.615429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.615914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:32.615953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.616015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.493645       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:56:33.493786       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 16:56:33.498727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.498778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.500587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:56:33.500622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.737858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:33.737918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.762653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:33.762736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.781707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.781836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.846619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:33.846670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.921594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 16:56:33.922678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.924769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:33.924819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:35.706106       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:08:02 addons-408385 kubelet[1205]: I0917 17:08:02.649962    1205 scope.go:117] "RemoveContainer" containerID="990c80a8af92019944f8483b2113c5cf01492a67faed69a84a95418a19a84136"
	Sep 17 17:08:03 addons-408385 kubelet[1205]: I0917 17:08:03.296218    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de28ed39-ce09-4f1d-be90-3b5c0d786949" path="/var/lib/kubelet/pods/de28ed39-ce09-4f1d-be90-3b5c0d786949/volumes"
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.762340    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmm4\" (UniqueName: \"kubernetes.io/projected/65dd5017-35ea-4389-bc1a-89c709765c06-kube-api-access-wdmm4\") pod \"65dd5017-35ea-4389-bc1a-89c709765c06\" (UID: \"65dd5017-35ea-4389-bc1a-89c709765c06\") "
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.762444    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/65dd5017-35ea-4389-bc1a-89c709765c06-gcp-creds\") pod \"65dd5017-35ea-4389-bc1a-89c709765c06\" (UID: \"65dd5017-35ea-4389-bc1a-89c709765c06\") "
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.762552    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dd5017-35ea-4389-bc1a-89c709765c06-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "65dd5017-35ea-4389-bc1a-89c709765c06" (UID: "65dd5017-35ea-4389-bc1a-89c709765c06"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.765938    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dd5017-35ea-4389-bc1a-89c709765c06-kube-api-access-wdmm4" (OuterVolumeSpecName: "kube-api-access-wdmm4") pod "65dd5017-35ea-4389-bc1a-89c709765c06" (UID: "65dd5017-35ea-4389-bc1a-89c709765c06"). InnerVolumeSpecName "kube-api-access-wdmm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.863173    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wdmm4\" (UniqueName: \"kubernetes.io/projected/65dd5017-35ea-4389-bc1a-89c709765c06-kube-api-access-wdmm4\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:08:04 addons-408385 kubelet[1205]: I0917 17:08:04.863204    1205 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/65dd5017-35ea-4389-bc1a-89c709765c06-gcp-creds\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.307694    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65dd5017-35ea-4389-bc1a-89c709765c06" path="/var/lib/kubelet/pods/65dd5017-35ea-4389-bc1a-89c709765c06/volumes"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.468558    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqgm6\" (UniqueName: \"kubernetes.io/projected/2f4278c0-9bc9-4d2d-8e73-43d39ddd1504-kube-api-access-nqgm6\") pod \"2f4278c0-9bc9-4d2d-8e73-43d39ddd1504\" (UID: \"2f4278c0-9bc9-4d2d-8e73-43d39ddd1504\") "
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.473690    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4278c0-9bc9-4d2d-8e73-43d39ddd1504-kube-api-access-nqgm6" (OuterVolumeSpecName: "kube-api-access-nqgm6") pod "2f4278c0-9bc9-4d2d-8e73-43d39ddd1504" (UID: "2f4278c0-9bc9-4d2d-8e73-43d39ddd1504"). InnerVolumeSpecName "kube-api-access-nqgm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.569412    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmzpj\" (UniqueName: \"kubernetes.io/projected/93e3187d-0292-45df-9221-e406397b489f-kube-api-access-fmzpj\") pod \"93e3187d-0292-45df-9221-e406397b489f\" (UID: \"93e3187d-0292-45df-9221-e406397b489f\") "
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.569516    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nqgm6\" (UniqueName: \"kubernetes.io/projected/2f4278c0-9bc9-4d2d-8e73-43d39ddd1504-kube-api-access-nqgm6\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.571494    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e3187d-0292-45df-9221-e406397b489f-kube-api-access-fmzpj" (OuterVolumeSpecName: "kube-api-access-fmzpj") pod "93e3187d-0292-45df-9221-e406397b489f" (UID: "93e3187d-0292-45df-9221-e406397b489f"). InnerVolumeSpecName "kube-api-access-fmzpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.670402    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fmzpj\" (UniqueName: \"kubernetes.io/projected/93e3187d-0292-45df-9221-e406397b489f-kube-api-access-fmzpj\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.680178    1205 scope.go:117] "RemoveContainer" containerID="c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: E0917 17:08:05.717860    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592885715918302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: E0917 17:08:05.717909    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726592885715918302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.754261    1205 scope.go:117] "RemoveContainer" containerID="c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: E0917 17:08:05.755402    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb\": container with ID starting with c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb not found: ID does not exist" containerID="c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.755461    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb"} err="failed to get container status \"c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb\": rpc error: code = NotFound desc = could not find container \"c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb\": container with ID starting with c82247c904482f9e99cc154f1e8a135e6426d4448ab9f00b1c3b39c3223e99bb not found: ID does not exist"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.755503    1205 scope.go:117] "RemoveContainer" containerID="ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.770287    1205 scope.go:117] "RemoveContainer" containerID="ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: E0917 17:08:05.771135    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8\": container with ID starting with ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8 not found: ID does not exist" containerID="ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8"
	Sep 17 17:08:05 addons-408385 kubelet[1205]: I0917 17:08:05.771191    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8"} err="failed to get container status \"ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8\": rpc error: code = NotFound desc = could not find container \"ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8\": container with ID starting with ae70ed511aee3a13c46f574fbac71cd8a7567b24c4b40c87c322b1e8f8319bd8 not found: ID does not exist"
	
	
	==> storage-provisioner [4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea] <==
	I0917 16:56:47.969062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:48.024282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:48.024402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:48.040055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:48.041757       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8b747fa-ca28-40fb-9f2b-ae004859bb2e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d became leader
	I0917 16:56:48.043770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	I0917 16:56:48.148903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-408385 -n addons-408385
helpers_test.go:261: (dbg) Run:  kubectl --context addons-408385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-78945 ingress-nginx-admission-patch-4b8gx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-408385 describe pod busybox ingress-nginx-admission-create-78945 ingress-nginx-admission-patch-4b8gx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-408385 describe pod busybox ingress-nginx-admission-create-78945 ingress-nginx-admission-patch-4b8gx: exit status 1 (84.578033ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-408385/192.168.39.170
	Start Time:       Tue, 17 Sep 2024 16:58:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhf5n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hhf5n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-408385
	  Normal   Pulling    7m59s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m59s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m36s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-78945" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4b8gx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-408385 describe pod busybox ingress-nginx-admission-create-78945 ingress-nginx-admission-patch-4b8gx: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.41s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (152.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-408385 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-408385 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-408385 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [076f30f2-e5f7-4810-8e8d-613a12b5664c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [076f30f2-e5f7-4810-8e8d-613a12b5664c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003623563s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-408385 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.353622248s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-408385 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.170
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable ingress-dns --alsologtostderr -v=1: (1.16447124s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable ingress --alsologtostderr -v=1: (7.765438931s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-408385 -n addons-408385
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 logs -n 25: (1.373942222s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-285125                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-581824                                                                     | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-285125                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | binary-mirror-510758                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36709                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510758                                                                     | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-408385 --wait=true                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-408385 ssh cat                                                                       | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | /opt/local-path-provisioner/pvc-909e1d4d-bf3e-45b2-8d6d-fc1ce31d7fc6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| ip      | addons-408385 ip                                                                            | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-408385 ssh curl -s                                                                   | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-408385                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-408385                                                                            |                      |         |         |                     |                     |
	| ip      | addons-408385 ip                                                                            | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:51.791795   18924 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:51.792044   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792053   18924 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:51.792058   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792230   18924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 16:55:51.792827   18924 out.go:352] Setting JSON to false
	I0917 16:55:51.793665   18924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2267,"bootTime":1726589885,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:51.793765   18924 start.go:139] virtualization: kvm guest
	I0917 16:55:51.795973   18924 out.go:177] * [addons-408385] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:51.797387   18924 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:55:51.797381   18924 notify.go:220] Checking for updates...
	I0917 16:55:51.798951   18924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:51.800529   18924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:55:51.801832   18924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:51.803070   18924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 16:55:51.804253   18924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:55:51.805653   18924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:51.838070   18924 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 16:55:51.839376   18924 start.go:297] selected driver: kvm2
	I0917 16:55:51.839394   18924 start.go:901] validating driver "kvm2" against <nil>
	I0917 16:55:51.839405   18924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:55:51.840126   18924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.840207   18924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 16:55:51.855471   18924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 16:55:51.855528   18924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:51.855817   18924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:55:51.855861   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:55:51.855920   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:55:51.855931   18924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:51.855997   18924 start.go:340] cluster config:
	{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:51.856122   18924 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.858118   18924 out.go:177] * Starting "addons-408385" primary control-plane node in "addons-408385" cluster
	I0917 16:55:51.859487   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:55:51.859520   18924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:51.859551   18924 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:51.859643   18924 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 16:55:51.859654   18924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 16:55:51.859979   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:55:51.860003   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json: {Name:mkaab3d4715b6a1329fbbb57cdab9fd6bad92461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:51.860158   18924 start.go:360] acquireMachinesLock for addons-408385: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 16:55:51.860218   18924 start.go:364] duration metric: took 44.183µs to acquireMachinesLock for "addons-408385"
	I0917 16:55:51.860239   18924 start.go:93] Provisioning new machine with config: &{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:55:51.860305   18924 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 16:55:51.862121   18924 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 16:55:51.862257   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:55:51.862301   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:55:51.877059   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0917 16:55:51.877513   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:55:51.877999   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:55:51.878018   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:55:51.878383   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:55:51.878572   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:55:51.878714   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:55:51.878883   18924 start.go:159] libmachine.API.Create for "addons-408385" (driver="kvm2")
	I0917 16:55:51.878911   18924 client.go:168] LocalClient.Create starting
	I0917 16:55:51.878946   18924 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 16:55:51.947974   18924 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 16:55:52.056813   18924 main.go:141] libmachine: Running pre-create checks...
	I0917 16:55:52.056834   18924 main.go:141] libmachine: (addons-408385) Calling .PreCreateCheck
	I0917 16:55:52.057355   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:55:52.057806   18924 main.go:141] libmachine: Creating machine...
	I0917 16:55:52.057820   18924 main.go:141] libmachine: (addons-408385) Calling .Create
	I0917 16:55:52.057938   18924 main.go:141] libmachine: (addons-408385) Creating KVM machine...
	I0917 16:55:52.059242   18924 main.go:141] libmachine: (addons-408385) DBG | found existing default KVM network
	I0917 16:55:52.060009   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.059868   18946 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0917 16:55:52.060022   18924 main.go:141] libmachine: (addons-408385) DBG | created network xml: 
	I0917 16:55:52.060030   18924 main.go:141] libmachine: (addons-408385) DBG | <network>
	I0917 16:55:52.060035   18924 main.go:141] libmachine: (addons-408385) DBG |   <name>mk-addons-408385</name>
	I0917 16:55:52.060041   18924 main.go:141] libmachine: (addons-408385) DBG |   <dns enable='no'/>
	I0917 16:55:52.060045   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060051   18924 main.go:141] libmachine: (addons-408385) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0917 16:55:52.060058   18924 main.go:141] libmachine: (addons-408385) DBG |     <dhcp>
	I0917 16:55:52.060064   18924 main.go:141] libmachine: (addons-408385) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0917 16:55:52.060070   18924 main.go:141] libmachine: (addons-408385) DBG |     </dhcp>
	I0917 16:55:52.060083   18924 main.go:141] libmachine: (addons-408385) DBG |   </ip>
	I0917 16:55:52.060092   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060101   18924 main.go:141] libmachine: (addons-408385) DBG | </network>
	I0917 16:55:52.060112   18924 main.go:141] libmachine: (addons-408385) DBG | 
	I0917 16:55:52.065525   18924 main.go:141] libmachine: (addons-408385) DBG | trying to create private KVM network mk-addons-408385 192.168.39.0/24...
	I0917 16:55:52.130546   18924 main.go:141] libmachine: (addons-408385) DBG | private KVM network mk-addons-408385 192.168.39.0/24 created
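For context, a private libvirt network defined from XML like the one printed above can be inspected on the host with standard virsh tooling. The commands below are an illustrative sketch only and were not run as part of this test (the kvm2 driver talks to libvirt through its API rather than the virsh CLI):

    virsh net-list --all                    # the new mk-addons-408385 network should be listed as active
    virsh net-dumpxml mk-addons-408385      # print the generated <network> definition back out
    virsh net-dhcp-leases mk-addons-408385  # show DHCP leases handed out in 192.168.39.0/24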
	I0917 16:55:52.130574   18924 main.go:141] libmachine: (addons-408385) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.130589   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.130546   18946 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.130612   18924 main.go:141] libmachine: (addons-408385) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 16:55:52.130765   18924 main.go:141] libmachine: (addons-408385) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 16:55:52.385741   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.385631   18946 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa...
	I0917 16:55:52.511387   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511277   18946 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk...
	I0917 16:55:52.511413   18924 main.go:141] libmachine: (addons-408385) DBG | Writing magic tar header
	I0917 16:55:52.511427   18924 main.go:141] libmachine: (addons-408385) DBG | Writing SSH key tar header
	I0917 16:55:52.511451   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511387   18946 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.511506   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385
	I0917 16:55:52.511525   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 16:55:52.511538   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 (perms=drwx------)
	I0917 16:55:52.511548   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.511562   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 16:55:52.511573   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 16:55:52.511586   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 16:55:52.511598   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 16:55:52.511610   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 16:55:52.511622   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 16:55:52.511634   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 16:55:52.511646   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins
	I0917 16:55:52.511656   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:52.511669   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home
	I0917 16:55:52.511682   18924 main.go:141] libmachine: (addons-408385) DBG | Skipping /home - not owner
	I0917 16:55:52.512603   18924 main.go:141] libmachine: (addons-408385) define libvirt domain using xml: 
	I0917 16:55:52.512625   18924 main.go:141] libmachine: (addons-408385) <domain type='kvm'>
	I0917 16:55:52.512635   18924 main.go:141] libmachine: (addons-408385)   <name>addons-408385</name>
	I0917 16:55:52.512642   18924 main.go:141] libmachine: (addons-408385)   <memory unit='MiB'>4000</memory>
	I0917 16:55:52.512649   18924 main.go:141] libmachine: (addons-408385)   <vcpu>2</vcpu>
	I0917 16:55:52.512661   18924 main.go:141] libmachine: (addons-408385)   <features>
	I0917 16:55:52.512670   18924 main.go:141] libmachine: (addons-408385)     <acpi/>
	I0917 16:55:52.512679   18924 main.go:141] libmachine: (addons-408385)     <apic/>
	I0917 16:55:52.512690   18924 main.go:141] libmachine: (addons-408385)     <pae/>
	I0917 16:55:52.512699   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.512706   18924 main.go:141] libmachine: (addons-408385)   </features>
	I0917 16:55:52.512714   18924 main.go:141] libmachine: (addons-408385)   <cpu mode='host-passthrough'>
	I0917 16:55:52.512721   18924 main.go:141] libmachine: (addons-408385)   
	I0917 16:55:52.512730   18924 main.go:141] libmachine: (addons-408385)   </cpu>
	I0917 16:55:52.512740   18924 main.go:141] libmachine: (addons-408385)   <os>
	I0917 16:55:52.512749   18924 main.go:141] libmachine: (addons-408385)     <type>hvm</type>
	I0917 16:55:52.512760   18924 main.go:141] libmachine: (addons-408385)     <boot dev='cdrom'/>
	I0917 16:55:52.512769   18924 main.go:141] libmachine: (addons-408385)     <boot dev='hd'/>
	I0917 16:55:52.512778   18924 main.go:141] libmachine: (addons-408385)     <bootmenu enable='no'/>
	I0917 16:55:52.512784   18924 main.go:141] libmachine: (addons-408385)   </os>
	I0917 16:55:52.512790   18924 main.go:141] libmachine: (addons-408385)   <devices>
	I0917 16:55:52.512802   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='cdrom'>
	I0917 16:55:52.512812   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/boot2docker.iso'/>
	I0917 16:55:52.512819   18924 main.go:141] libmachine: (addons-408385)       <target dev='hdc' bus='scsi'/>
	I0917 16:55:52.512824   18924 main.go:141] libmachine: (addons-408385)       <readonly/>
	I0917 16:55:52.512834   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512861   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='disk'>
	I0917 16:55:52.512887   18924 main.go:141] libmachine: (addons-408385)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 16:55:52.512920   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk'/>
	I0917 16:55:52.512947   18924 main.go:141] libmachine: (addons-408385)       <target dev='hda' bus='virtio'/>
	I0917 16:55:52.512961   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512977   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.512987   18924 main.go:141] libmachine: (addons-408385)       <source network='mk-addons-408385'/>
	I0917 16:55:52.512994   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.512999   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513008   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.513020   18924 main.go:141] libmachine: (addons-408385)       <source network='default'/>
	I0917 16:55:52.513030   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.513041   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513054   18924 main.go:141] libmachine: (addons-408385)     <serial type='pty'>
	I0917 16:55:52.513065   18924 main.go:141] libmachine: (addons-408385)       <target port='0'/>
	I0917 16:55:52.513074   18924 main.go:141] libmachine: (addons-408385)     </serial>
	I0917 16:55:52.513083   18924 main.go:141] libmachine: (addons-408385)     <console type='pty'>
	I0917 16:55:52.513090   18924 main.go:141] libmachine: (addons-408385)       <target type='serial' port='0'/>
	I0917 16:55:52.513100   18924 main.go:141] libmachine: (addons-408385)     </console>
	I0917 16:55:52.513110   18924 main.go:141] libmachine: (addons-408385)     <rng model='virtio'>
	I0917 16:55:52.513123   18924 main.go:141] libmachine: (addons-408385)       <backend model='random'>/dev/random</backend>
	I0917 16:55:52.513136   18924 main.go:141] libmachine: (addons-408385)     </rng>
	I0917 16:55:52.513146   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513151   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513161   18924 main.go:141] libmachine: (addons-408385)   </devices>
	I0917 16:55:52.513168   18924 main.go:141] libmachine: (addons-408385) </domain>
	I0917 16:55:52.513179   18924 main.go:141] libmachine: (addons-408385) 
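For reference, the manual equivalent of creating a VM from a domain definition like the one printed above would look roughly like the following. This is a hedged sketch (the XML file path is hypothetical), not what the driver literally executes, since it defines and starts the domain through the libvirt API:

    virsh define /tmp/addons-408385.xml   # register a domain from XML like the block above (path is hypothetical)
    virsh start addons-408385             # boot the VM from the attached boot2docker ISO
    virsh domifaddr addons-408385         # query the guest IP once DHCP on mk-addons-408385 has assigned one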
	I0917 16:55:52.519149   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:10:0b:0b in network default
	I0917 16:55:52.519688   18924 main.go:141] libmachine: (addons-408385) Ensuring networks are active...
	I0917 16:55:52.519712   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:52.520323   18924 main.go:141] libmachine: (addons-408385) Ensuring network default is active
	I0917 16:55:52.520629   18924 main.go:141] libmachine: (addons-408385) Ensuring network mk-addons-408385 is active
	I0917 16:55:52.521053   18924 main.go:141] libmachine: (addons-408385) Getting domain xml...
	I0917 16:55:52.521710   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:53.811430   18924 main.go:141] libmachine: (addons-408385) Waiting to get IP...
	I0917 16:55:53.812152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:53.812522   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:53.812543   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:53.812493   18946 retry.go:31] will retry after 197.5195ms: waiting for machine to come up
	I0917 16:55:54.012026   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.012441   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.012468   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.012412   18946 retry.go:31] will retry after 326.010953ms: waiting for machine to come up
	I0917 16:55:54.339858   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.340287   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.340312   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.340239   18946 retry.go:31] will retry after 296.869686ms: waiting for machine to come up
	I0917 16:55:54.638673   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.639104   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.639128   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.639060   18946 retry.go:31] will retry after 392.314611ms: waiting for machine to come up
	I0917 16:55:55.032985   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.033655   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.033684   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.033600   18946 retry.go:31] will retry after 585.264566ms: waiting for machine to come up
	I0917 16:55:55.620073   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.620498   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.620534   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.620466   18946 retry.go:31] will retry after 797.322744ms: waiting for machine to come up
	I0917 16:55:56.419607   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:56.420088   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:56.420115   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:56.420046   18946 retry.go:31] will retry after 1.028584855s: waiting for machine to come up
	I0917 16:55:57.450058   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:57.450474   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:57.450503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:57.450420   18946 retry.go:31] will retry after 1.43599402s: waiting for machine to come up
	I0917 16:55:58.888104   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:58.888459   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:58.888481   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:58.888437   18946 retry.go:31] will retry after 1.280603811s: waiting for machine to come up
	I0917 16:56:00.170844   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:00.171138   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:00.171158   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:00.171116   18946 retry.go:31] will retry after 1.674811656s: waiting for machine to come up
	I0917 16:56:01.848038   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:01.848477   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:01.848503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:01.848445   18946 retry.go:31] will retry after 2.792716027s: waiting for machine to come up
	I0917 16:56:04.644899   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:04.645317   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:04.645336   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:04.645282   18946 retry.go:31] will retry after 2.720169067s: waiting for machine to come up
	I0917 16:56:07.367470   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:07.367874   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:07.367899   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:07.367847   18946 retry.go:31] will retry after 4.528965555s: waiting for machine to come up
	I0917 16:56:11.898213   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:11.898579   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:11.898600   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:11.898539   18946 retry.go:31] will retry after 4.262922802s: waiting for machine to come up
	I0917 16:56:16.165468   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.165964   18924 main.go:141] libmachine: (addons-408385) Found IP for machine: 192.168.39.170
	I0917 16:56:16.165979   18924 main.go:141] libmachine: (addons-408385) Reserving static IP address...
	I0917 16:56:16.165988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has current primary IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.166352   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find host DHCP lease matching {name: "addons-408385", mac: "52:54:00:69:b5:a2", ip: "192.168.39.170"} in network mk-addons-408385
	I0917 16:56:16.239610   18924 main.go:141] libmachine: (addons-408385) DBG | Getting to WaitForSSH function...
	I0917 16:56:16.239655   18924 main.go:141] libmachine: (addons-408385) Reserved static IP address: 192.168.39.170
	I0917 16:56:16.239670   18924 main.go:141] libmachine: (addons-408385) Waiting for SSH to be available...
	I0917 16:56:16.242205   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242648   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.242681   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242868   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH client type: external
	I0917 16:56:16.242892   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa (-rw-------)
	I0917 16:56:16.242919   18924 main.go:141] libmachine: (addons-408385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 16:56:16.242929   18924 main.go:141] libmachine: (addons-408385) DBG | About to run SSH command:
	I0917 16:56:16.242938   18924 main.go:141] libmachine: (addons-408385) DBG | exit 0
	I0917 16:56:16.377461   18924 main.go:141] libmachine: (addons-408385) DBG | SSH cmd err, output: <nil>: 
	I0917 16:56:16.377719   18924 main.go:141] libmachine: (addons-408385) KVM machine creation complete!
	I0917 16:56:16.378103   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:16.378639   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378776   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378886   18924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 16:56:16.378895   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:16.380224   18924 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 16:56:16.380240   18924 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 16:56:16.380247   18924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 16:56:16.380282   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.382400   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382795   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.382826   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382937   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.383090   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383243   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.383453   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.383654   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.383667   18924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 16:56:16.496650   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:16.496684   18924 main.go:141] libmachine: Detecting the provisioner...
	I0917 16:56:16.496692   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.499052   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499387   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.499419   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499509   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.499704   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499841   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499969   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.500153   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.500355   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.500368   18924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 16:56:16.614164   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 16:56:16.614231   18924 main.go:141] libmachine: found compatible host: buildroot
	I0917 16:56:16.614239   18924 main.go:141] libmachine: Provisioning with buildroot...
	I0917 16:56:16.614251   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614509   18924 buildroot.go:166] provisioning hostname "addons-408385"
	I0917 16:56:16.614541   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614725   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.616892   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617265   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.617292   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617459   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.617618   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617766   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617880   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.618037   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.618259   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.618274   18924 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-408385 && echo "addons-408385" | sudo tee /etc/hostname
	I0917 16:56:16.748306   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408385
	
	I0917 16:56:16.748338   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.751036   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751353   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.751375   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751594   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.751810   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.751967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.752091   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.752236   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.752408   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.752423   18924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-408385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-408385/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-408385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:56:16.874871   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:16.874903   18924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 16:56:16.874921   18924 buildroot.go:174] setting up certificates
	I0917 16:56:16.874931   18924 provision.go:84] configureAuth start
	I0917 16:56:16.874941   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.875174   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:16.877616   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.877962   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.877988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.878128   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.879974   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880235   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.880259   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880362   18924 provision.go:143] copyHostCerts
	I0917 16:56:16.880447   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 16:56:16.880581   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 16:56:16.880694   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 16:56:16.880808   18924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.addons-408385 san=[127.0.0.1 192.168.39.170 addons-408385 localhost minikube]
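As a quick sanity check, the SANs baked into a generated server certificate like the one above can be listed with openssl. This invocation is illustrative only and is not part of the test run; the path matches the store path logged above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'   # expect 127.0.0.1, 192.168.39.170, addons-408385, localhost, minikube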
	I0917 16:56:17.201888   18924 provision.go:177] copyRemoteCerts
	I0917 16:56:17.201953   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:56:17.201979   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.204413   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204738   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.204767   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204895   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.205077   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.205246   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.205392   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.291808   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 16:56:17.316923   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:56:17.341072   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 16:56:17.365516   18924 provision.go:87] duration metric: took 490.573886ms to configureAuth
	I0917 16:56:17.365539   18924 buildroot.go:189] setting minikube options for container-runtime
	I0917 16:56:17.365730   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:17.365826   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.368283   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368639   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.368670   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368823   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.369022   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369153   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369339   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.369514   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.369693   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.369712   18924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 16:56:17.597824   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 16:56:17.597848   18924 main.go:141] libmachine: Checking connection to Docker...
	I0917 16:56:17.597855   18924 main.go:141] libmachine: (addons-408385) Calling .GetURL
	I0917 16:56:17.599183   18924 main.go:141] libmachine: (addons-408385) DBG | Using libvirt version 6000000
	I0917 16:56:17.601596   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.601942   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.602006   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.602117   18924 main.go:141] libmachine: Docker is up and running!
	I0917 16:56:17.602131   18924 main.go:141] libmachine: Reticulating splines...
	I0917 16:56:17.602139   18924 client.go:171] duration metric: took 25.723220135s to LocalClient.Create
	I0917 16:56:17.602162   18924 start.go:167] duration metric: took 25.723279645s to libmachine.API.Create "addons-408385"
	I0917 16:56:17.602175   18924 start.go:293] postStartSetup for "addons-408385" (driver="kvm2")
	I0917 16:56:17.602188   18924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:17.602210   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.602465   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:17.602494   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.604650   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.604946   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.604964   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.605100   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.605274   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.605409   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.605565   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.694995   18924 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:56:17.699639   18924 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 16:56:17.699666   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 16:56:17.699739   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 16:56:17.699761   18924 start.go:296] duration metric: took 97.580146ms for postStartSetup
	I0917 16:56:17.699789   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:17.700415   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.702737   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703149   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.703177   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703448   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:56:17.703625   18924 start.go:128] duration metric: took 25.843310151s to createHost
	I0917 16:56:17.703646   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.705890   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706224   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.706252   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706358   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.706557   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706719   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706848   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.706979   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.707143   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.707155   18924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 16:56:17.822141   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726592177.789241010
	
	I0917 16:56:17.822164   18924 fix.go:216] guest clock: 1726592177.789241010
	I0917 16:56:17.822171   18924 fix.go:229] Guest: 2024-09-17 16:56:17.78924101 +0000 UTC Remote: 2024-09-17 16:56:17.703636441 +0000 UTC m=+25.947315089 (delta=85.604569ms)
	I0917 16:56:17.822210   18924 fix.go:200] guest clock delta is within tolerance: 85.604569ms
	I0917 16:56:17.822215   18924 start.go:83] releasing machines lock for "addons-408385", held for 25.961986034s
	I0917 16:56:17.822238   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.822502   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.825005   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825336   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.825360   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825513   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826069   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826274   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826383   18924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 16:56:17.826443   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.826489   18924 ssh_runner.go:195] Run: cat /version.json
	I0917 16:56:17.826513   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.829125   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829486   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829512   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829533   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829632   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.829794   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.829906   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829934   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829954   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830071   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.830128   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.830224   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.830373   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830521   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.951534   18924 ssh_runner.go:195] Run: systemctl --version
	I0917 16:56:17.958040   18924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 16:56:18.115686   18924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 16:56:18.123126   18924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 16:56:18.123194   18924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:18.140793   18924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 16:56:18.140817   18924 start.go:495] detecting cgroup driver to use...
	I0917 16:56:18.140888   18924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 16:56:18.158500   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 16:56:18.173453   18924 docker.go:217] disabling cri-docker service (if available) ...
	I0917 16:56:18.173513   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 16:56:18.187957   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 16:56:18.202598   18924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 16:56:18.333027   18924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 16:56:18.469130   18924 docker.go:233] disabling docker service ...
	I0917 16:56:18.469199   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 16:56:18.484667   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 16:56:18.498998   18924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 16:56:18.641389   18924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 16:56:18.776008   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 16:56:18.790837   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:18.812674   18924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 16:56:18.812737   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.823898   18924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 16:56:18.823956   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.834933   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.845553   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.856619   18924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:18.868015   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.879257   18924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.899805   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
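For reference, the net effect of the sed edits above on the CRI-O drop-in is roughly the following (illustrative sketch only; the real /etc/crio/crio.conf.d/02-crio.conf contains additional settings and section headers not shown in the log):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]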
	I0917 16:56:18.911427   18924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:18.921735   18924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 16:56:18.921790   18924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 16:56:18.936457   18924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:56:18.946747   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:19.065494   18924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 16:56:19.226108   18924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 16:56:19.226205   18924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 16:56:19.231213   18924 start.go:563] Will wait 60s for crictl version
	I0917 16:56:19.231297   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:56:19.235087   18924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:56:19.281633   18924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 16:56:19.281783   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.311850   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.341785   18924 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 16:56:19.343242   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:19.345825   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346167   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:19.346191   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346407   18924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:19.350778   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:19.364110   18924 kubeadm.go:883] updating cluster {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:19.364217   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:56:19.364273   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:19.396930   18924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 16:56:19.397013   18924 ssh_runner.go:195] Run: which lz4
	I0917 16:56:19.401270   18924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 16:56:19.405740   18924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 16:56:19.405769   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 16:56:20.822525   18924 crio.go:462] duration metric: took 1.421306506s to copy over tarball
	I0917 16:56:20.822624   18924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 16:56:23.006691   18924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.184029221s)
	I0917 16:56:23.006730   18924 crio.go:469] duration metric: took 2.18417646s to extract the tarball
	I0917 16:56:23.006741   18924 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 16:56:23.043946   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:23.086263   18924 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 16:56:23.086285   18924 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:56:23.086293   18924 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.1 crio true true} ...
	I0917 16:56:23.086391   18924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-408385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 16:56:23.086476   18924 ssh_runner.go:195] Run: crio config
	I0917 16:56:23.135589   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:23.135612   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:23.135622   18924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:23.135642   18924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-408385 NodeName:addons-408385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:23.135765   18924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-408385"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 16:56:23.135824   18924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:23.146424   18924 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:56:23.146483   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:23.156664   18924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 16:56:23.176236   18924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:23.195926   18924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0917 16:56:23.215956   18924 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:23.220278   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
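Taken together, this hosts-file edit and the earlier one for host.minikube.internal leave /etc/hosts with two minikube-specific entries (values taken directly from the commands above):

    192.168.39.1	host.minikube.internal
    192.168.39.170	control-plane.minikube.internal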
	I0917 16:56:23.233718   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:23.361479   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:23.378343   18924 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385 for IP: 192.168.39.170
	I0917 16:56:23.378364   18924 certs.go:194] generating shared ca certs ...
	I0917 16:56:23.378379   18924 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.378538   18924 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 16:56:23.468659   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt ...
	I0917 16:56:23.468687   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt: {Name:mk4b2dc121f54e472a610da41ce39781730efcb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468849   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key ...
	I0917 16:56:23.468860   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key: {Name:mk39fdbf9eb5c96a10b5f07aaa642e9ef6ef62c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468930   18924 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 16:56:23.595987   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt ...
	I0917 16:56:23.596018   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt: {Name:mk688819f8e2946789f357ecd51fe07706693989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596170   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key ...
	I0917 16:56:23.596179   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key: {Name:mkcde83262d3acd542cf7897dccc5670ae8cce18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596265   18924 certs.go:256] generating profile certs ...
	I0917 16:56:23.596328   18924 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key
	I0917 16:56:23.596374   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt with IP's: []
	I0917 16:56:23.869724   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt ...
	I0917 16:56:23.869759   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: {Name:mk4d7f220fa0245c5bbf00a3bd85f1e0aa7b9b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.869952   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key ...
	I0917 16:56:23.869965   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key: {Name:mka2d16d15d95cd3b1c29597e7f457020bb94a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.870061   18924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253
	I0917 16:56:23.870080   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170]
	I0917 16:56:24.042828   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 ...
	I0917 16:56:24.042859   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253: {Name:mkcf5a60df0a4773d88e8945f55342f4090e0047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043040   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 ...
	I0917 16:56:24.043056   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253: {Name:mk4c9b250fe83846f2bf2a73f79edfbf255dff83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043155   18924 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt
	I0917 16:56:24.043233   18924 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key
	I0917 16:56:24.043281   18924 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key
	I0917 16:56:24.043297   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt with IP's: []
	I0917 16:56:24.187225   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt ...
	I0917 16:56:24.187252   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt: {Name:mk2cb67c490b7c4e2ac97ea0e98192c0133b5d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187447   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key ...
	I0917 16:56:24.187462   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key: {Name:mk7886820c83ede55497d40d59a86ffc001d73bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187650   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 16:56:24.187683   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 16:56:24.187708   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:24.187731   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:24.188296   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:24.217099   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:56:24.260095   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:24.286974   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 16:56:24.312555   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:56:24.338456   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:24.364498   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:24.390393   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 16:56:24.416565   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:24.441061   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:56:24.459229   18924 ssh_runner.go:195] Run: openssl version
	I0917 16:56:24.466207   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:24.477993   18924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482776   18924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482851   18924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.489986   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 16:56:24.501914   18924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:24.506316   18924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:24.506374   18924 kubeadm.go:392] StartCluster: {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:24.506440   18924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 16:56:24.506497   18924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 16:56:24.546313   18924 cri.go:89] found id: ""
	I0917 16:56:24.546370   18924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:24.556630   18924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:24.567104   18924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:24.577871   18924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:24.577897   18924 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:24.577941   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:24.588136   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:24.588194   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:24.598858   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:24.608830   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:24.608895   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:24.619369   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.630137   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:24.630198   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.640661   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:24.650527   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:24.650585   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:24.661071   18924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 16:56:24.716386   18924 kubeadm.go:310] W0917 16:56:24.688487     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.717689   18924 kubeadm.go:310] W0917 16:56:24.690025     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.829103   18924 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:56:35.968996   18924 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:35.969071   18924 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:35.969172   18924 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:35.969326   18924 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:35.969456   18924 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:35.969552   18924 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:35.971346   18924 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:35.971417   18924 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:35.971479   18924 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:35.971560   18924 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:35.971628   18924 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:35.971688   18924 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:35.971734   18924 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:35.971786   18924 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:35.971889   18924 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.971939   18924 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:35.972038   18924 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.972112   18924 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:35.972189   18924 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:35.972237   18924 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:35.972303   18924 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:35.972346   18924 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:35.972402   18924 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:35.972454   18924 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:35.972511   18924 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:35.972592   18924 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:35.972711   18924 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:35.972783   18924 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:35.975168   18924 out.go:235]   - Booting up control plane ...
	I0917 16:56:35.975264   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:35.975333   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:35.975390   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:35.975497   18924 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:35.975587   18924 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:35.975627   18924 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:35.975737   18924 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:35.975844   18924 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:35.975901   18924 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001493427s
	I0917 16:56:35.975973   18924 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:35.976034   18924 kubeadm.go:310] [api-check] The API server is healthy after 5.001561419s
	I0917 16:56:35.976169   18924 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:35.976274   18924 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:35.976324   18924 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:35.976482   18924 kubeadm.go:310] [mark-control-plane] Marking the node addons-408385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:35.976537   18924 kubeadm.go:310] [bootstrap-token] Using token: sa12t0.gjj5918ic1mqv0s7
	I0917 16:56:35.977945   18924 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:35.978054   18924 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:35.978128   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:35.978288   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:35.978410   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:35.978518   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:35.978615   18924 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:35.978719   18924 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:35.978764   18924 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:35.978818   18924 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:35.978838   18924 kubeadm.go:310] 
	I0917 16:56:35.978908   18924 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:35.978916   18924 kubeadm.go:310] 
	I0917 16:56:35.978996   18924 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:35.979002   18924 kubeadm.go:310] 
	I0917 16:56:35.979023   18924 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:35.979079   18924 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:35.979124   18924 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:35.979130   18924 kubeadm.go:310] 
	I0917 16:56:35.979179   18924 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:35.979186   18924 kubeadm.go:310] 
	I0917 16:56:35.979225   18924 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:35.979231   18924 kubeadm.go:310] 
	I0917 16:56:35.979277   18924 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:35.979341   18924 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:35.979408   18924 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:35.979414   18924 kubeadm.go:310] 
	I0917 16:56:35.979487   18924 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:35.979556   18924 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:35.979562   18924 kubeadm.go:310] 
	I0917 16:56:35.979647   18924 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.979750   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 16:56:35.979771   18924 kubeadm.go:310] 	--control-plane 
	I0917 16:56:35.979776   18924 kubeadm.go:310] 
	I0917 16:56:35.979853   18924 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:35.979861   18924 kubeadm.go:310] 
	I0917 16:56:35.979942   18924 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.980055   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 16:56:35.980068   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:35.980074   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:35.982263   18924 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:35.983608   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:35.994882   18924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
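The 496-byte conflist itself is not reproduced in the log. A generic CNI bridge configuration of roughly this shape (field names from the standard CNI bridge/host-local/portmap plugins; values are assumptions for illustration, with the pod CIDR taken from the kubeadm config above) would look like:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }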
	I0917 16:56:36.019583   18924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:36.019687   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.019738   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-408385 minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-408385 minikube.k8s.io/primary=true
	I0917 16:56:36.048300   18924 ops.go:34] apiserver oom_adj: -16
	I0917 16:56:36.170162   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.670383   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.170820   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.671076   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.170926   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.671033   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.170837   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.670394   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.171111   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.279973   18924 kubeadm.go:1113] duration metric: took 4.260359264s to wait for elevateKubeSystemPrivileges
	I0917 16:56:40.280020   18924 kubeadm.go:394] duration metric: took 15.773648579s to StartCluster
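The burst of "kubectl get sa default" calls above is a simple poll-until-ready loop: minikube retries roughly every 500ms until the default ServiceAccount exists before granting kube-system privileges. A minimal sketch of that pattern in Go (function name, retry interval and timeout are assumptions for illustration, not minikube's actual implementation; the kubectl and kubeconfig paths are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
    // timeout expires, mirroring the retry loop visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // the default ServiceAccount exists; safe to proceed
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval, matching the ~500ms cadence in the log
    	}
    	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
    	// Paths below appear in the log; the 2-minute timeout is an assumption.
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    	}
    }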
	I0917 16:56:40.280041   18924 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280170   18924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:56:40.280550   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280764   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:40.280775   18924 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:56:40.280828   18924 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 16:56:40.280929   18924 addons.go:69] Setting inspektor-gadget=true in profile "addons-408385"
	I0917 16:56:40.280942   18924 addons.go:69] Setting volcano=true in profile "addons-408385"
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon volcano=true in "addons-408385"
	I0917 16:56:40.280953   18924 addons.go:69] Setting storage-provisioner=true in profile "addons-408385"
	I0917 16:56:40.280966   18924 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-408385"
	I0917 16:56:40.280977   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.280993   18924 addons.go:69] Setting volumesnapshots=true in profile "addons-408385"
	I0917 16:56:40.280996   18924 addons.go:69] Setting metrics-server=true in profile "addons-408385"
	I0917 16:56:40.281007   18924 addons.go:234] Setting addon volumesnapshots=true in "addons-408385"
	I0917 16:56:40.281017   18924 addons.go:69] Setting helm-tiller=true in profile "addons-408385"
	I0917 16:56:40.280959   18924 addons.go:69] Setting cloud-spanner=true in profile "addons-408385"
	I0917 16:56:40.281025   18924 addons.go:69] Setting ingress-dns=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting default-storageclass=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting gcp-auth=true in profile "addons-408385"
	I0917 16:56:40.281038   18924 addons.go:234] Setting addon ingress-dns=true in "addons-408385"
	I0917 16:56:40.281029   18924 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-408385"
	I0917 16:56:40.281044   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-408385"
	I0917 16:56:40.281049   18924 mustload.go:65] Loading cluster: addons-408385
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-408385"
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon cloud-spanner=true in "addons-408385"
	I0917 16:56:40.281064   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon inspektor-gadget=true in "addons-408385"
	I0917 16:56:40.281084   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281092   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281104   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280980   18924 addons.go:234] Setting addon storage-provisioner=true in "addons-408385"
	I0917 16:56:40.281211   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.281258   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281547   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281572   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281587   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281033   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281010   18924 addons.go:234] Setting addon metrics-server=true in "addons-408385"
	I0917 16:56:40.281537   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.280928   18924 addons.go:69] Setting yakd=true in profile "addons-408385"
	I0917 16:56:40.280984   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281654   18924 addons.go:234] Setting addon yakd=true in "addons-408385"
	I0917 16:56:40.280987   18924 addons.go:69] Setting registry=true in profile "addons-408385"
	I0917 16:56:40.281672   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281014   18924 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:40.281028   18924 addons.go:234] Setting addon helm-tiller=true in "addons-408385"
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281702   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281674   18924 addons.go:234] Setting addon registry=true in "addons-408385"
	I0917 16:56:40.281712   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281541   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281732   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281742   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281021   18924 addons.go:69] Setting ingress=true in profile "addons-408385"
	I0917 16:56:40.281764   18924 addons.go:234] Setting addon ingress=true in "addons-408385"
	I0917 16:56:40.280936   18924 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-408385"
	I0917 16:56:40.281825   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-408385"
	I0917 16:56:40.281873   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281950   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282079   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282161   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282186   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282226   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282238   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282249   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282261   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282263   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282286   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282307   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282229   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282101   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282578   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282605   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282779   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282827   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282866   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.283094   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.283133   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.286312   18924 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:40.287618   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:40.298908   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I0917 16:56:40.299068   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0917 16:56:40.309608   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.309649   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.309700   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0917 16:56:40.309807   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0917 16:56:40.310065   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.310120   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.311980   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312056   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312293   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312867   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.312887   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313019   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313031   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313141   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313152   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313482   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313528   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.313558   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313604   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.314157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314183   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.314467   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.314488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.314708   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315123   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.315527   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.315559   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315951   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.320566   18924 addons.go:234] Setting addon default-storageclass=true in "addons-408385"
	I0917 16:56:40.320611   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.320981   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.321026   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.346242   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0917 16:56:40.346807   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.347541   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.347571   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.348071   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.353705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.357689   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0917 16:56:40.358028   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0917 16:56:40.358152   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0917 16:56:40.358342   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0917 16:56:40.358940   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I0917 16:56:40.359063   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359591   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359683   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.359700   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.359848   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359959   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0917 16:56:40.360078   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.360347   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360572   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360585   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360591   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360604   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360670   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360866   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360880   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.360892   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360995   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.361576   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361617   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361638   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.361651   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.361713   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361816   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362002   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362194   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.362474   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.362507   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.363919   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0917 16:56:40.364019   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364306   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364485   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.364489   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.364503   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.365582   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0917 16:56:40.365887   18924 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-408385"
	I0917 16:56:40.365928   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.366157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366191   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366314   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366338   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366584   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.366594   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0917 16:56:40.385020   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0917 16:56:40.385053   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0917 16:56:40.385026   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0917 16:56:40.385345   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385360   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385377   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385392   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385441   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.385849   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.385945   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.386207   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.386235   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.386770   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386839   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.386856   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.386896   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387149   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.387217   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.387351   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387369   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387504   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387514   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387573   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.387723   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.387999   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.388667   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.388686   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.388751   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0917 16:56:40.389456   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.389491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.390745   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.390799   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.390825   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.390929   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.391352   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.391419   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.391635   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0917 16:56:40.391780   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.391820   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.391906   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.391922   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.392274   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.392725   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.392756   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.392952   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I0917 16:56:40.393072   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.393464   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.393477   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.393806   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.393834   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.393926   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.394284   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.394301   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.394504   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395211   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395377   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395596   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395806   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.396088   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.396426   18924 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:40.396481   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:40.398128   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398337   18924 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.398355   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:40.398374   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.398436   18924 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:40.398463   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398937   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:40.399242   18924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:40.399265   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.399639   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:56:40.400906   18924 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:56:40.400950   18924 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:40.401326   18924 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:40.401347   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.401945   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402595   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.402623   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402809   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.402906   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:40.402919   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:56:40.402936   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.402975   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.403463   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.403552   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.403728   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.406113   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.406794   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406822   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0917 16:56:40.406831   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406851   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.406868   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.407039   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.407412   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.407477   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I0917 16:56:40.407478   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.407595   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.407739   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.407938   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.407953   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:56:40.407967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.408588   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408686   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.408706   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408715   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.409106   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.409130   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.409365   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.409533   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.409652   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.409751   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.410256   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.410275   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.410457   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.410629   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.410870   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.410885   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.410934   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.411338   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.411612   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.411868   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0917 16:56:40.412251   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.412291   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.412296   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.412474   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.412838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.412875   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.412954   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.412975   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.413015   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.413065   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.413625   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.413666   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.413850   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.414043   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.414175   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.414769   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.415585   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.417193   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.417660   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0917 16:56:40.418242   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.418815   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.418842   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.419070   18924 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:40.419428   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.420115   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.420155   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.420492   18924 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:40.420506   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:40.420522   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.422261   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0917 16:56:40.423377   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.423827   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424457   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.424478   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.424549   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.424568   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424717   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.424845   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.424938   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.425025   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.425342   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.427630   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0917 16:56:40.428234   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.428248   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0917 16:56:40.428817   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.428839   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.428912   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.429324   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.429475   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.429488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.429563   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429599   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429886   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0917 16:56:40.430318   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.430434   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.430844   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.430971   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.430982   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.431353   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.431404   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.431891   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.433596   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.435129   18924 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:40.435873   18924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:40.436250   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.436549   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:40.436566   18924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:40.436587   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437385   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:40.437402   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:40.437420   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437904   18924 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:40.439229   18924 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:40.440893   18924 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:40.440910   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:40.440929   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.442279   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.442325   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0917 16:56:40.442832   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443262   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443270   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.443295   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443522   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443551   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443747   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.443765   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.443793   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443812   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443956   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.443983   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.444081   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444089   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444200   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0917 16:56:40.444225   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.444247   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.444406   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.444567   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.444597   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.445584   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.445602   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.446414   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.446488   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0917 16:56:40.446604   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.447066   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.447281   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.448242   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.448260   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.448317   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.448690   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:40.448734   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.449156   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.449170   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.449190   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.449336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.449490   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.449542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.449675   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.449801   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.450234   18924 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:56:40.451504   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:40.451601   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.451622   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:56:40.451645   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.452360   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0917 16:56:40.452543   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.452876   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.453320   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.453340   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:40.453929   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.454105   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.454452   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0917 16:56:40.454781   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.455056   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:40.455077   18924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:40.455161   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.455248   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455502   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.455528   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455769   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.455786   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.455855   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.455994   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.456119   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.456170   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:40.456370   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.456909   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.457348   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.457979   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458463   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.458485   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.458484   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458600   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:40.458671   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.458710   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.458729   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.458887   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.458918   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.458929   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.458938   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.458939   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.459021   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.460243   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.460246   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.460263   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	W0917 16:56:40.460349   18924 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 16:56:40.460425   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.460624   18924 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:40.460639   18924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:40.460661   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.460968   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:40.463033   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:40.463859   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464289   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.464310   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464521   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.464735   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.464912   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.465060   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.465353   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:40.466408   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:40.466430   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:40.466455   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.468998   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469414   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0917 16:56:40.469615   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.469633   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469650   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.469800   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.469875   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.470033   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.470168   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.470428   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.470451   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.470890   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.471071   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.472593   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.474373   18924 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:40.475735   18924 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:40.477135   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:40.477147   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:40.477165   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.480812   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481316   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.481354   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481624   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.481827   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.481966   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.482082   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.887038   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:40.887063   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:40.957503   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.957833   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.990013   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.996441   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:40.996591   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
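	(Note on the command above: the sed pipeline rewrites the cluster's CoreDNS ConfigMap so pods can resolve host.minikube.internal to the host-side IP 192.168.39.1 and adds query logging. Reconstructed from the command itself rather than captured from the cluster, the fragment it inserts ahead of the existing "forward . /etc/resolv.conf" directive looks roughly like:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

	with a "log" directive additionally inserted before the "errors" line.)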
	I0917 16:56:41.047793   18924 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:41.047816   18924 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:41.050251   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:41.050266   18924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:41.052602   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:41.052619   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:41.070072   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:41.070385   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:41.085507   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:41.098190   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:41.098217   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:56:41.112724   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:41.177089   18924 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:41.177113   18924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:41.200547   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:41.200577   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:41.201601   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:41.201619   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:41.263241   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:41.263268   18924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:41.284538   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:41.284563   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:41.462449   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.462479   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:56:41.516502   18924 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:41.516526   18924 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:41.527742   18924 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.527763   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:41.592582   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:41.592603   18924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:41.692484   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:41.692515   18924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:41.707737   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:41.707771   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:41.725728   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:41.725752   18924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:41.751606   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.763147   18924 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:41.763174   18924 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:41.845855   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.917959   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:41.917982   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:41.932379   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:41.932409   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:41.933743   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:41.933758   18924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:42.000189   18924 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.000209   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:42.019019   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:42.019039   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:42.120876   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:42.120903   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:42.215490   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:42.219259   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.235839   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:42.249709   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:42.249738   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:42.408626   18924 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:42.408660   18924 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:42.597811   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:42.597836   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:42.832549   18924 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:42.832574   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:42.877638   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:42.877673   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:43.070157   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:43.223931   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:43.223966   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:43.642910   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:43.642945   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:44.074864   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:44.074888   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:44.426715   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:44.426745   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:44.816971   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:47.444904   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:47.444944   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:47.448454   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.448848   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:47.448876   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.449068   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:47.449290   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:47.449479   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:47.449640   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:48.201028   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:48.440942   18924 addons.go:234] Setting addon gcp-auth=true in "addons-408385"
	I0917 16:56:48.440997   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:48.441325   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.441359   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.457638   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0917 16:56:48.458035   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.458476   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.458498   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.459269   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.459712   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.459740   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.475904   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0917 16:56:48.476401   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.476926   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.476955   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.477337   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.477515   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:48.479054   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:48.479263   18924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:48.479286   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:48.481756   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482133   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:48.482152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482342   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:48.482542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:48.482682   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:48.482802   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:50.488236   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.53069821s)
	I0917 16:56:50.488278   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.530418125s)
	I0917 16:56:50.488291   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488303   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488312   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488328   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488345   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.49830433s)
	I0917 16:56:50.488378   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488393   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488405   18924 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.491938735s)
	I0917 16:56:50.488459   18924 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.491834553s)
	I0917 16:56:50.488485   18924 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
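
	The sed pipeline completed above injects a hosts block into CoreDNS's Corefile so that host.minikube.internal resolves to the host-side address 192.168.39.1. A minimal way to confirm the record landed, assuming the same kubeconfig (the pod name dns-check is illustrative and not part of the test):

	    # Print the rewritten Corefile and look for the injected hosts block.
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # Resolve the injected record from a throwaway busybox pod (name is illustrative).
	    kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
	      nslookup host.minikube.internal
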
	I0917 16:56:50.488684   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.418586295s)
	I0917 16:56:50.488715   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488725   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488806   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.418403871s)
	I0917 16:56:50.488819   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488826   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488884   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.403354884s)
	I0917 16:56:50.488898   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488905   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488948   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.376200431s)
	I0917 16:56:50.488960   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488968   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489028   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.737398015s)
	I0917 16:56:50.489042   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489053   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489103   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.643223171s)
	I0917 16:56:50.489114   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489123   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489186   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.273670484s)
	I0917 16:56:50.489198   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489205   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489334   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.27004628s)
	W0917 16:56:50.489366   18924 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:56:50.489413   18924 retry.go:31] will retry after 216.517027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
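
	The failure above is an ordering problem: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the snapshot.storage.k8s.io CRDs created in the same apply had not been established yet, so the API server had no mapping for that kind. The retry scheduled here (and the later apply --force at 16:56:50.706) succeeds once the CRDs register. Done by hand, the ordering could be made explicit with something like the following sketch against the same addon manifests; it is not what minikube itself runs:

	    # Apply the snapshot CRDs first and wait for them to be established ...
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # ... then apply the VolumeSnapshotClass that previously had no matching kind.
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
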
	I0917 16:56:50.489488   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.253623687s)
	I0917 16:56:50.489516   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489528   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489623   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.419429218s)
	I0917 16:56:50.489637   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489645   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490730   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490746   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490761   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490769   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490773   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490776   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490780   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490789   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490794   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490867   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490875   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490876   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490881   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490883   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490889   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490891   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490898   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490930   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490950   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490956   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490963   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490969   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491011   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491030   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491037   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491044   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491051   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491088   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491105   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491111   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491119   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491128   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491168   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491188   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491195   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491202   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491208   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491246   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491266   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491272   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491281   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491288   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491328   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491348   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491353   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491360   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491366   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491403   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491422   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491428   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491435   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491441   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491476   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491493   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491498   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491505   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491511   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.492188   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.492223   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.492231   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494315   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494322   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494514   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494536   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494542   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494578   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494609   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494616   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494691   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494714   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494720   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494832   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494873   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494880   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495687   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495706   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495732   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.495738   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495746   18924 addons.go:475] Verifying addon registry=true in "addons-408385"
	I0917 16:56:50.496538   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496544   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496555   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496564   18924 addons.go:475] Verifying addon metrics-server=true in "addons-408385"
	I0917 16:56:50.496566   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496573   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496624   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496639   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496683   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496717   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496720   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496727   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496735   18924 addons.go:475] Verifying addon ingress=true in "addons-408385"
	I0917 16:56:50.496808   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496815   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.497192   18924 node_ready.go:35] waiting up to 6m0s for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.497351   18924 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-408385 service yakd-dashboard -n yakd-dashboard
	
	I0917 16:56:50.497389   18924 out.go:177] * Verifying registry addon...
	I0917 16:56:50.498257   18924 out.go:177] * Verifying ingress addon...
	I0917 16:56:50.500180   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:56:50.500419   18924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 16:56:50.518284   18924 node_ready.go:49] node "addons-408385" has status "Ready":"True"
	I0917 16:56:50.518306   18924 node_ready.go:38] duration metric: took 21.091831ms for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.518315   18924 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:50.520856   18924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:56:50.520883   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:50.523079   18924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 16:56:50.523105   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
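
	The kapi waits that follow poll these label selectors until every matching pod reports Ready or the wait times out. A rough command-line equivalent of a single check, assuming the same context, is sketched below; the harness polls through the Kubernetes API rather than shelling out, and completed admission Job pods under the ingress-nginx label would need to be excluded from such a wait:

	    # One-shot readiness checks for the selectors being polled above.
	    kubectl -n kube-system wait --for=condition=Ready pod \
	      -l kubernetes.io/minikube-addons=registry --timeout=6m
	    kubectl -n ingress-nginx wait --for=condition=Ready pod \
	      -l app.kubernetes.io/name=ingress-nginx --timeout=6m
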
	I0917 16:56:50.546145   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581347   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.581372   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.581745   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.581768   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.581818   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.581841   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.581859   18924 pod_ready.go:82] duration metric: took 35.685801ms for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581871   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	W0917 16:56:50.581910   18924 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0917 16:56:50.586512   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.586530   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.586847   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.586867   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.596137   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.596162   18924 pod_ready.go:82] duration metric: took 14.284009ms for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.596172   18924 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623810   18924 pod_ready.go:93] pod "etcd-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.623835   18924 pod_ready.go:82] duration metric: took 27.656536ms for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623845   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.706847   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:50.717893   18924 pod_ready.go:93] pod "kube-apiserver-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.717915   18924 pod_ready.go:82] duration metric: took 94.063278ms for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.717925   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902706   18924 pod_ready.go:93] pod "kube-controller-manager-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.902732   18924 pod_ready.go:82] duration metric: took 184.800591ms for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902744   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.993709   18924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-408385" context rescaled to 1 replicas
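
	The rescale logged above trims the default two coredns replicas down to one, matching the single-node footprint (both coredns-7c65d6cfc9 pods were Ready moments earlier). As a sketch, the imperative equivalent of that rescale would be:

	    # Imperative equivalent of the rescale performed by the addon code.
	    kubectl -n kube-system scale deployment coredns --replicas=1
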
	I0917 16:56:51.006258   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.006412   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.311675   18924 pod_ready.go:93] pod "kube-proxy-6blpt" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.311702   18924 pod_ready.go:82] duration metric: took 408.951515ms for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.311711   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.511546   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.512343   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.712678   18924 pod_ready.go:93] pod "kube-scheduler-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.712702   18924 pod_ready.go:82] duration metric: took 400.983783ms for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.712710   18924 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:52.025749   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.026250   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.190681   18924 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.711392152s)
	I0917 16:56:52.191047   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.373996255s)
	I0917 16:56:52.191104   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191125   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191470   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:52.191517   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191536   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191553   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191566   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191792   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191805   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191826   18924 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:52.192415   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:52.193515   18924 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:56:52.195286   18924 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:56:52.196006   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:56:52.196821   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:56:52.196837   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:56:52.214434   18924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:56:52.214458   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:52.371675   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:56:52.371704   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:56:52.497363   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.497383   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:56:52.504719   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.564224   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.700595   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.011701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.012015   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.159940   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.453036172s)
	I0917 16:56:53.160005   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160022   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:53.160332   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160341   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.160357   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160374   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160616   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160633   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.201585   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.506249   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.506293   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.709697   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.738231   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:53.984139   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.419872413s)
	I0917 16:56:53.984190   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984212   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984568   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984589   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.984604   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984612   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984834   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984853   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.987027   18924 addons.go:475] Verifying addon gcp-auth=true in "addons-408385"
	I0917 16:56:53.988873   18924 out.go:177] * Verifying gcp-auth addon...
	I0917 16:56:53.990825   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:56:54.055092   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.055115   18924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:56:54.055131   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.055387   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.202926   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.494716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.506148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.506174   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.701636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.994373   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.005045   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.005494   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.200848   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.495663   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.504909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.506086   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.855465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:55.856656   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.993948   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.005711   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.006104   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.201254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.494421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.505090   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.505414   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.701176   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.004844   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.005282   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.200660   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.494627   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.504621   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.505103   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.700909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.994928   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.004757   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.005263   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.201434   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.219886   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:58.495575   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.504836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.505317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.701773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.994959   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.005951   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.006611   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.201975   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.495332   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.506814   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.507819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.700658   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.995245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.004708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.006302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.200967   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.219938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:00.495921   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.506377   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.506950   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.703768   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.995363   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.010398   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.011329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.202047   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.495085   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.504652   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.505645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.702029   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.994945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.006766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.008040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.200473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.221720   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:02.495451   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.504315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.506062   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.700326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.995096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.005924   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.006819   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.201912   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.495000   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.504765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.505943   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.701922   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.995337   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.004819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.005035   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.201761   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.494642   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.504915   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.505321   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.702013   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.719604   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:04.995214   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.004602   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.005121   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.200850   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.494936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.505716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.506224   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.700440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.994611   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.004208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.006099   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.200977   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.528028   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.528127   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.528173   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.701154   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.994040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.004294   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.004738   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.200229   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.219592   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:07.495326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.504606   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.505193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.700901   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.995249   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.004764   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.004900   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.200699   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.495328   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.503987   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.506826   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.700862   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.994609   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.004062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.004349   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.202126   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.220482   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:09.494945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.505116   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.506159   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.701734   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.996629   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.019821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.021645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.201473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.495799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.504801   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.506075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.704466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.994193   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.005581   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.005762   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.201601   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.495169   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.504802   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.505211   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.700302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.719276   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:11.994525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.004692   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.005129   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.201376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.494979   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.505561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.505703   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.975902   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.995801   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.004147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.006830   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.200882   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.496008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.506567   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.507195   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.701055   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.719675   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:13.994939   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.004466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.004915   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.202094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.495836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.507503   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.508148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.700728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.996044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.006105   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.006707   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.201653   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.494526   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.504505   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.505363   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.703586   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.994788   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.005108   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.005808   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.206044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.220095   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:16.494315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.505315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.704765   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.995307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.096405   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.096552   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.200374   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.495743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.505031   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.506329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.721075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.995723   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.004552   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.005928   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.200087   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.495274   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.504597   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.507379   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.700946   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.719392   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:18.994993   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.004577   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.005098   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.589168   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.589327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.589667   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.589832   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.700535   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.994305   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.004913   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.005728   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.200701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.494743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.504820   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.506113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.702270   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.995072   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.004890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.005076   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.201054   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.219658   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:21.495297   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.505528   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.506012   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.702119   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.996390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.005561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.005652   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.200739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.494563   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.506327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.506676   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.700496   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.032136   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.032957   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.033036   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.202150   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.494360   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.504706   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.505348   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.947525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.948575   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:23.994678   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.004245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.005329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.201222   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.495096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.508318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.510378   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.995276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.004269   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.007124   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.201504   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.495365   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.505283   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.505799   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.700648   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.039815   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.040228   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.040316   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.210495   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.220088   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:26.495232   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.510833   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.511093   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.700936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.996436   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.004910   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.005741   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.202425   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.495288   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.505457   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.508530   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.700773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.995376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.005437   18924 kapi.go:107] duration metric: took 37.50525233s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 16:57:28.005661   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.201963   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.495032   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.505610   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.701512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.728312   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:28.995608   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.005993   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.202300   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.497995   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.504870   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.700212   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.995246   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.004884   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.202534   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.495333   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.505996   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.702019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.994099   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.005314   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.202708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.229988   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:31.493840   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.504120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.701449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.994920   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.004766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.357159   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.495449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.505535   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.701208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.995100   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.004376   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.200664   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.498557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.507115   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.700821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.718587   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:33.995468   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.005462   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.201071   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.495519   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.505080   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.701276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.995558   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.004981   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.203003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.494303   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.504708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.700739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.718782   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:35.994881   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.097365   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.201890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.495139   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.505487   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.701057   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.996834   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.005523   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.410454   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.516803   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.517120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.701410   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.729938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:37.996501   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.005193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.200777   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.494507   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.504434   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.701189   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.994900   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.004122   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.201715   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.496473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.506073   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.703094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.994841   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.004452   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.201004   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:40.218439   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:40.495729   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.504143   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.853096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.158440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.159441   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.203681   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.494298   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.701128   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.993947   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.005059   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.201190   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.219465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:42.495543   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.505413   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.995239   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.004317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.201671   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.495708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.505113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.702002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.997002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.004765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.200983   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.507042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.510903   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.702550   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.723909   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:44.996307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.004982   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.201479   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.495981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.505405   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.700916   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.998807   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.011459   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.201895   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.495657   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.701933   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.999183   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.006964   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.203049   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.219100   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:47.498008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.506371   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.707797   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.996867   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.004924   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.201042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.495636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.511151   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.701120   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.996590   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.012436   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.202003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.494728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.505025   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.785313   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.788215   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:49.994837   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.004304   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.201021   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.495181   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.505534   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.701006   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.994275   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.005002   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.203019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.495078   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.596463   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.701421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.994676   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.005680   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.200539   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:52.218738   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:52.497799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.504566   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.700922   18924 kapi.go:107] duration metric: took 1m0.504912498s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:52.995147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.004995   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.494512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.505190   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.994795   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.004440   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.219175   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:54.495330   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.505134   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.995438   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.004940   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.495125   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.504590   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.995062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.004478   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.225636   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:56.499868   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.505194   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.996724   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.005674   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.495684   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.505781   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.995631   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.008631   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.494323   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.504300   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.718959   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:58.993981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.004453   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.494338   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.505823   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.995264   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.005723   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.500318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.507383   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.937272   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:00.995905   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.004584   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.494776   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.504156   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.995441   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.005703   18924 kapi.go:107] duration metric: took 1m11.505283995s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 16:58:02.495427   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.994984   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.220146   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:03.494557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.995254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.495578   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.592860   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.720255   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:05.995373   18924 kapi.go:107] duration metric: took 1m12.00454435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:58:05.997195   18924 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-408385 cluster.
	I0917 16:58:05.998523   18924 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:58:05.999866   18924 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:58:06.001358   18924 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 16:58:06.002828   18924 addons.go:510] duration metric: took 1m25.721995771s for enable addons: enabled=[helm-tiller nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server ingress-dns cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 16:58:08.220603   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:10.720898   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:13.220582   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:15.719610   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:18.218981   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:20.219159   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:22.219968   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:24.220176   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:26.719061   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:28.719706   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:31.220522   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:33.222077   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:35.720029   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:37.220204   18924 pod_ready.go:93] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.220228   18924 pod_ready.go:82] duration metric: took 1m45.507511223s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.220238   18924 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225164   18924 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.225186   18924 pod_ready.go:82] duration metric: took 4.941018ms for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225205   18924 pod_ready.go:39] duration metric: took 1m46.70687885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:58:37.225220   18924 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:58:37.225261   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:37.225308   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:37.279317   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.279344   18924 cri.go:89] found id: ""
	I0917 16:58:37.279354   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:37.279413   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.283927   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:37.283993   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:37.333054   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.333075   18924 cri.go:89] found id: ""
	I0917 16:58:37.333082   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:37.333127   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.337854   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:37.337913   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:37.376799   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:37.376819   18924 cri.go:89] found id: ""
	I0917 16:58:37.376826   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:37.376871   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.381347   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:37.381426   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:37.427851   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:37.427871   18924 cri.go:89] found id: ""
	I0917 16:58:37.427878   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:37.427920   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.432240   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:37.432302   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:37.479690   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.479709   18924 cri.go:89] found id: ""
	I0917 16:58:37.479720   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:37.479769   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.484307   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:37.484359   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:37.530462   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:37.530482   18924 cri.go:89] found id: ""
	I0917 16:58:37.530490   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:37.530536   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.534804   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:37.534867   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:37.576826   18924 cri.go:89] found id: ""
	I0917 16:58:37.576855   18924 logs.go:276] 0 containers: []
	W0917 16:58:37.576867   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:37.576879   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:37.576897   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.628751   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:37.628793   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.693416   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:37.693451   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.734110   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:37.734140   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:38.409207   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:38.409261   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:38.463953   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:38.463988   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:38.554114   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:38.554151   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:38.572938   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:38.572963   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:38.770050   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:38.770086   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:38.817495   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:38.817523   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:38.864149   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:38.864183   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.429718   18924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:58:41.453455   18924 api_server.go:72] duration metric: took 2m1.172653121s to wait for apiserver process to appear ...
	I0917 16:58:41.453496   18924 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:58:41.453536   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:41.453601   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:41.494855   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:41.494880   18924 cri.go:89] found id: ""
	I0917 16:58:41.494890   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:41.494938   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.499492   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:41.499556   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:41.538940   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.538965   18924 cri.go:89] found id: ""
	I0917 16:58:41.538974   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:41.539031   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.543179   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:41.543238   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:41.592083   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.592107   18924 cri.go:89] found id: ""
	I0917 16:58:41.592115   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:41.592162   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.596864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:41.596926   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:41.642101   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.642126   18924 cri.go:89] found id: ""
	I0917 16:58:41.642136   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:41.642182   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.647074   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:41.647150   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:41.689215   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:41.689253   18924 cri.go:89] found id: ""
	I0917 16:58:41.689262   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:41.689322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.693834   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:41.693902   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:41.736215   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.736241   18924 cri.go:89] found id: ""
	I0917 16:58:41.736251   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:41.736309   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.740897   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:41.740965   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:41.782588   18924 cri.go:89] found id: ""
	I0917 16:58:41.782611   18924 logs.go:276] 0 containers: []
	W0917 16:58:41.782619   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:41.782626   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:41.782637   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.843944   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:41.843982   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.886360   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:41.886389   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.932278   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:41.932318   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:42.000845   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:42.000894   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:42.752922   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:42.752965   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:42.806585   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:42.806621   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:42.822915   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:42.822950   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:42.872331   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:42.872363   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:42.911531   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:42.911556   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:43.001970   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:43.002011   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:45.635837   18924 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0917 16:58:45.642306   18924 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0917 16:58:45.643242   18924 api_server.go:141] control plane version: v1.31.1
	I0917 16:58:45.643264   18924 api_server.go:131] duration metric: took 4.189760157s to wait for apiserver health ...
	I0917 16:58:45.643271   18924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:58:45.643288   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:45.643328   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:45.693219   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:45.693256   18924 cri.go:89] found id: ""
	I0917 16:58:45.693265   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:45.693322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.698334   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:45.698400   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:45.762484   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:45.762509   18924 cri.go:89] found id: ""
	I0917 16:58:45.762517   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:45.762574   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.767293   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:45.767362   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:45.815706   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:45.815734   18924 cri.go:89] found id: ""
	I0917 16:58:45.815743   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:45.815801   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.821316   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:45.821379   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:45.872354   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:45.872375   18924 cri.go:89] found id: ""
	I0917 16:58:45.872384   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:45.872457   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.876864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:45.876916   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:45.933435   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:45.933456   18924 cri.go:89] found id: ""
	I0917 16:58:45.933464   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:45.933522   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.937839   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:45.937893   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:45.990922   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:45.990950   18924 cri.go:89] found id: ""
	I0917 16:58:45.990960   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:45.991013   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.995807   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:45.995870   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:46.057313   18924 cri.go:89] found id: ""
	I0917 16:58:46.057345   18924 logs.go:276] 0 containers: []
	W0917 16:58:46.057362   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:46.057372   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:46.057385   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:46.149501   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:46.149539   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:46.282319   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:46.282352   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:46.337878   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:46.337916   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:46.391452   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:46.391485   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:46.429573   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:46.429607   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:47.320590   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:47.320629   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:47.339176   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:47.339207   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:47.401618   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:47.401661   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:47.448277   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:47.448312   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:47.519002   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:47.519038   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:50.090393   18924 system_pods.go:59] 18 kube-system pods found
	I0917 16:58:50.090427   18924 system_pods.go:61] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.090432   18924 system_pods.go:61] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.090436   18924 system_pods.go:61] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.090440   18924 system_pods.go:61] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.090443   18924 system_pods.go:61] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.090446   18924 system_pods.go:61] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.090449   18924 system_pods.go:61] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.090453   18924 system_pods.go:61] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.090456   18924 system_pods.go:61] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.090459   18924 system_pods.go:61] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.090462   18924 system_pods.go:61] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.090465   18924 system_pods.go:61] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.090468   18924 system_pods.go:61] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.090472   18924 system_pods.go:61] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.090477   18924 system_pods.go:61] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.090480   18924 system_pods.go:61] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.090483   18924 system_pods.go:61] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.090486   18924 system_pods.go:61] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.090492   18924 system_pods.go:74] duration metric: took 4.447215491s to wait for pod list to return data ...
	I0917 16:58:50.090505   18924 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:58:50.093151   18924 default_sa.go:45] found service account: "default"
	I0917 16:58:50.093172   18924 default_sa.go:55] duration metric: took 2.662022ms for default service account to be created ...
	I0917 16:58:50.093180   18924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:58:50.100564   18924 system_pods.go:86] 18 kube-system pods found
	I0917 16:58:50.100596   18924 system_pods.go:89] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.100607   18924 system_pods.go:89] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.100619   18924 system_pods.go:89] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.100623   18924 system_pods.go:89] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.100628   18924 system_pods.go:89] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.100632   18924 system_pods.go:89] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.100637   18924 system_pods.go:89] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.100640   18924 system_pods.go:89] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.100643   18924 system_pods.go:89] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.100647   18924 system_pods.go:89] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.100650   18924 system_pods.go:89] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.100657   18924 system_pods.go:89] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.100664   18924 system_pods.go:89] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.100667   18924 system_pods.go:89] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.100670   18924 system_pods.go:89] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.100674   18924 system_pods.go:89] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.100677   18924 system_pods.go:89] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.100680   18924 system_pods.go:89] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.100687   18924 system_pods.go:126] duration metric: took 7.502942ms to wait for k8s-apps to be running ...
	I0917 16:58:50.100695   18924 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:58:50.100746   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:58:50.115763   18924 system_svc.go:56] duration metric: took 15.057221ms WaitForService to wait for kubelet
	I0917 16:58:50.115798   18924 kubeadm.go:582] duration metric: took 2m9.83500224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:58:50.115816   18924 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:58:50.119437   18924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 16:58:50.119462   18924 node_conditions.go:123] node cpu capacity is 2
	I0917 16:58:50.119474   18924 node_conditions.go:105] duration metric: took 3.65352ms to run NodePressure ...
	I0917 16:58:50.119484   18924 start.go:241] waiting for startup goroutines ...
	I0917 16:58:50.119490   18924 start.go:246] waiting for cluster config update ...
	I0917 16:58:50.119505   18924 start.go:255] writing updated cluster config ...
	I0917 16:58:50.119789   18924 ssh_runner.go:195] Run: rm -f paused
	I0917 16:58:50.169934   18924 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:58:50.173108   18924 out.go:177] * Done! kubectl is now configured to use "addons-408385" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.566853259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593027566818209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e30e98c-2117-4a76-8413-1be6c56e6c2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.567889013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48f54d8d-5f21-42e2-9e77-cb29b99128f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.567976092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48f54d8d-5f21-42e2-9e77-cb29b99128f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.568290193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48f54d8d-5f21-42e2-9e77-cb29b99128f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.609119866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3df1a04-b3a6-4e28-9974-4d6873bc9e3b name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.609217484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3df1a04-b3a6-4e28-9974-4d6873bc9e3b name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.610537609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f11b7afa-b943-4929-8c7e-8cbd5a54e0c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.611865805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593027611834758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f11b7afa-b943-4929-8c7e-8cbd5a54e0c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.612665378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbd50144-7af9-41d0-9bfa-2e23e3e94620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.612756076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbd50144-7af9-41d0-9bfa-2e23e3e94620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.613074583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbd50144-7af9-41d0-9bfa-2e23e3e94620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.654422328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f88f343d-206d-47f5-961d-1211de803dc2 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.654514975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f88f343d-206d-47f5-961d-1211de803dc2 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.655512396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=021cb1f0-0acb-46df-a567-0c57654a337d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.656701200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593027656669089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=021cb1f0-0acb-46df-a567-0c57654a337d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.657317136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21158963-471d-4dd7-8adf-19c02d1c338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.657445600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21158963-471d-4dd7-8adf-19c02d1c338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.657820875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21158963-471d-4dd7-8adf-19c02d1c338b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.698780900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15aa16ee-38b5-49fb-8cb1-ac0c1bfa8c44 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.698878454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15aa16ee-38b5-49fb-8cb1-ac0c1bfa8c44 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.700210105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70b8ceac-e14f-41a5-a67d-4664ae2004bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.701472301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593027701436308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70b8ceac-e14f-41a5-a67d-4664ae2004bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.702112383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67b27398-f005-4481-9074-4dae3572aec2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.702178825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67b27398-f005-4481-9074-4dae3572aec2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:10:27 addons-408385 crio[662]: time="2024-09-17 17:10:27.702664450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbf8a94347cdf3d76d7c3a16e6e93cd904b404fc749bbf93aaf4c0045a2dea9d,PodSandboxId:d7e2e0870993796d606398fc8c186520db9a834b52f5c6551efda8ab541a156f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592258234195990,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4b8gx,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9f69cee-9c47-4129-9373-eb8999ab009c,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8c16cc48c9cae24c50e551dc4ffb786f6502b1e65fc8bcf65397e3beba2dd,PodSandboxId:31ba27ee18db2a2dce5e74f5feccf64d3dc534d81ab0058f0a67afdd27002961,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726592257659953729,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-78945,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46dcf364-4802-4e08-9db9-0e89c4984788,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State
:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,Creat
edAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307
063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67b27398-f005-4481-9074-4dae3572aec2 name=/runtime.v1.RuntimeService/ListContainers
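
The block above is debug-level CRI-O interceptor logging from the node's crio[662] process, dominated by the kubelet's periodic Version/ImageFsInfo/ListContainers polls; the same container list is returned unchanged on each poll. Assuming cri-o runs as the crio systemd unit on this VM (which the journald-style "crio[662]" prefix suggests), a comparable capture could be pulled with something like:

    minikube ssh -p addons-408385 -- sudo journalctl -u crio -n 200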
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00b9a002ec125       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   9e624bc2158e4       hello-world-app-55bf9c44b4-cnzjd
	397f391630495       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   6e889439508c6       nginx
	f4c5e175eedc0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 12 minutes ago      Running             gcp-auth                  0                   e39bbf11dc165       gcp-auth-89d5ffd79-b7hz4
	fbf8a94347cdf       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             12 minutes ago      Exited              patch                     1                   d7e2e08709937       ingress-nginx-admission-patch-4b8gx
	6ea8c16cc48c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   31ba27ee18db2       ingress-nginx-admission-create-78945
	c35ba12caa08b       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   d5f73be9090dd       metrics-server-84c5f94fbc-nxwr4
	4b3332c3d6766       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   1266dc642a5d5       storage-provisioner
	bc6baaebe3ad7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   147e55def5b9c       coredns-7c65d6cfc9-6scmn
	78abe757b26b6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   ba0e8772c0eef       kube-proxy-6blpt
	535459bc7374f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   0d77877590458       etcd-addons-408385
	5e8239454541e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   900247dfddd23       kube-scheduler-addons-408385
	eb8765767a52a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   b694988dbe897       kube-controller-manager-addons-408385
	bd97816994086       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   b5606602c326c       kube-apiserver-addons-408385
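
The container status table above matches crictl's listing format (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD); a roughly equivalent snapshot can be taken directly on the node with, for example:

    minikube ssh -p addons-408385 -- sudo crictl ps -a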
	
	
	==> coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] <==
	[INFO] 127.0.0.1:57157 - 41320 "HINFO IN 6395580120945152869.1644042831807943476. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013215393s
	[INFO] 10.244.0.7:46761 - 53795 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000416235s
	[INFO] 10.244.0.7:46761 - 26406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000534252s
	[INFO] 10.244.0.7:48828 - 43868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012662s
	[INFO] 10.244.0.7:48828 - 6464 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151166s
	[INFO] 10.244.0.7:37590 - 72 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092427s
	[INFO] 10.244.0.7:37590 - 33095 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167498s
	[INFO] 10.244.0.7:58968 - 53960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109529s
	[INFO] 10.244.0.7:58968 - 34006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103021s
	[INFO] 10.244.0.7:37473 - 44286 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113543s
	[INFO] 10.244.0.7:37473 - 56545 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093243s
	[INFO] 10.244.0.7:41216 - 28183 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000250593s
	[INFO] 10.244.0.7:41216 - 45082 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008693s
	[INFO] 10.244.0.7:54147 - 34285 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052976s
	[INFO] 10.244.0.7:54147 - 34283 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055269s
	[INFO] 10.244.0.7:52498 - 26622 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077619s
	[INFO] 10.244.0.7:52498 - 59135 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083094s
	[INFO] 10.244.0.22:33436 - 15658 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000483817s
	[INFO] 10.244.0.22:54534 - 52664 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000596513s
	[INFO] 10.244.0.22:60274 - 25830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160929s
	[INFO] 10.244.0.22:55742 - 23361 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135249s
	[INFO] 10.244.0.22:58422 - 120 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117693s
	[INFO] 10.244.0.22:60253 - 8920 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000245419s
	[INFO] 10.244.0.22:47422 - 15749 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000974274s
	[INFO] 10.244.0.22:57287 - 962 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001082864s
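
In the coredns log above, the NXDOMAIN entries appear to be the expected search-domain expansion of registry.kube-system.svc.cluster.local (the client retries the name against each suffix in its resolv.conf search list before the bare service name resolves with NOERROR). The same log can be pulled with, for instance:

    kubectl --context addons-408385 -n kube-system logs coredns-7c65d6cfc9-6scmn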
	
	
	==> describe nodes <==
	Name:               addons-408385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-408385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-408385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-408385
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-408385
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:10:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:08:38 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:08:38 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:08:38 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:08:38 +0000   Tue, 17 Sep 2024 16:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    addons-408385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 303ab64fe93940c69a272a146d3d7928
	  System UUID:                303ab64f-e939-40c6-9a27-2a146d3d7928
	  Boot ID:                    fb6d0db4-ddc4-405a-8acb-6d4fe2f98715
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-cnzjd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-b7hz4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-6scmn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-408385                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-408385             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-408385    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-6blpt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-408385             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-nxwr4          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-408385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-408385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-408385 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-408385 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-408385 event: Registered Node addons-408385 in Controller
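
The node summary above is standard describe-node output for this profile and should be reproducible with:

    kubectl --context addons-408385 describe node addons-408385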
	
	
	==> dmesg <==
	[  +7.274919] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.009609] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.152447] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.508316] kauditd_printk_skb: 31 callbacks suppressed
	[  +9.343675] kauditd_printk_skb: 13 callbacks suppressed
	[Sep17 16:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.434512] kauditd_printk_skb: 35 callbacks suppressed
	[ +36.392548] kauditd_printk_skb: 30 callbacks suppressed
	[Sep17 16:59] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:00] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:03] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:06] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:07] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.051902] kauditd_printk_skb: 49 callbacks suppressed
	[ +21.859037] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.899065] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.636828] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.569598] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.336851] kauditd_printk_skb: 27 callbacks suppressed
	[Sep17 17:08] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.561003] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.057084] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.541549] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.340073] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:10] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] <==
	{"level":"warn","ts":"2024-09-17T16:57:40.825821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.125828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-09-17T16:57:40.825856Z","caller":"traceutil/trace.go:171","msg":"trace[1110547650] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4; range_end:; response_count:1; response_revision:1019; }","duration":"133.163284ms","start":"2024-09-17T16:57:40.692688Z","end":"2024-09-17T16:57:40.825851Z","steps":["trace[1110547650] 'agreement among raft nodes before linearized reading'  (duration: 133.076718ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:40.825938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.331801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:40.825956Z","caller":"traceutil/trace.go:171","msg":"trace[1450865517] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1019; }","duration":"142.345433ms","start":"2024-09-17T16:57:40.683601Z","end":"2024-09-17T16:57:40.825947Z","steps":["trace[1450865517] 'agreement among raft nodes before linearized reading'  (duration: 142.316331ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:41.126635Z","caller":"traceutil/trace.go:171","msg":"trace[957294072] linearizableReadLoop","detail":"{readStateIndex:1046; appliedIndex:1045; }","duration":"156.015075ms","start":"2024-09-17T16:57:40.970600Z","end":"2024-09-17T16:57:41.126615Z","steps":["trace[957294072] 'read index received'  (duration: 151.361865ms)","trace[957294072] 'applied index is now lower than readState.Index'  (duration: 4.652523ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:57:41.127181Z","caller":"traceutil/trace.go:171","msg":"trace[1537240876] transaction","detail":"{read_only:false; response_revision:1020; number_of_response:1; }","duration":"288.479372ms","start":"2024-09-17T16:57:40.838686Z","end":"2024-09-17T16:57:41.127165Z","steps":["trace[1537240876] 'process raft request'  (duration: 283.161611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.127246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.650442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.129441Z","caller":"traceutil/trace.go:171","msg":"trace[1844929121] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"158.756306ms","start":"2024-09-17T16:57:40.970576Z","end":"2024-09-17T16:57:41.129332Z","steps":["trace[1844929121] 'agreement among raft nodes before linearized reading'  (duration: 156.630189ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.128691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.896858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.130001Z","caller":"traceutil/trace.go:171","msg":"trace[2083427081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"150.161523ms","start":"2024-09-17T16:57:40.979772Z","end":"2024-09-17T16:57:41.129934Z","steps":["trace[2083427081] 'agreement among raft nodes before linearized reading'  (duration: 148.875086ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:49.758492Z","caller":"traceutil/trace.go:171","msg":"trace[1749067213] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"142.014283ms","start":"2024-09-17T16:57:49.616449Z","end":"2024-09-17T16:57:49.758464Z","steps":["trace[1749067213] 'read index received'  (duration: 137.745885ms)","trace[1749067213] 'applied index is now lower than readState.Index'  (duration: 4.264398ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:57:49.761832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.490472ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:49.761946Z","caller":"traceutil/trace.go:171","msg":"trace[402711062] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1100; }","duration":"145.520155ms","start":"2024-09-17T16:57:49.616413Z","end":"2024-09-17T16:57:49.761933Z","steps":["trace[402711062] 'agreement among raft nodes before linearized reading'  (duration: 142.267631ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:00.905621Z","caller":"traceutil/trace.go:171","msg":"trace[1306530275] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"289.751652ms","start":"2024-09-17T16:58:00.615836Z","end":"2024-09-17T16:58:00.905587Z","steps":["trace[1306530275] 'read index received'  (duration: 289.557299ms)","trace[1306530275] 'applied index is now lower than readState.Index'  (duration: 193.815µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:58:00.905791Z","caller":"traceutil/trace.go:171","msg":"trace[759752851] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"392.185176ms","start":"2024-09-17T16:58:00.513584Z","end":"2024-09-17T16:58:00.905769Z","steps":["trace[759752851] 'process raft request'  (duration: 391.871349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.905917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T16:58:00.513568Z","time spent":"392.247218ms","remote":"127.0.0.1:46318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-17T16:58:00.906045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.207792ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:58:00.906091Z","caller":"traceutil/trace.go:171","msg":"trace[250334734] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1127; }","duration":"290.252982ms","start":"2024-09-17T16:58:00.615830Z","end":"2024-09-17T16:58:00.906083Z","steps":["trace[250334734] 'agreement among raft nodes before linearized reading'  (duration: 290.193074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.906401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.852875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-09-17T16:58:00.906443Z","caller":"traceutil/trace.go:171","msg":"trace[1208448744] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4; range_end:; response_count:1; response_revision:1127; }","duration":"214.898501ms","start":"2024-09-17T16:58:00.691537Z","end":"2024-09-17T16:58:00.906435Z","steps":["trace[1208448744] 'agreement among raft nodes before linearized reading'  (duration: 214.748925ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:05.566022Z","caller":"traceutil/trace.go:171","msg":"trace[1110581115] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"216.906105ms","start":"2024-09-17T16:58:05.349099Z","end":"2024-09-17T16:58:05.566005Z","steps":["trace[1110581115] 'process raft request'  (duration: 216.44414ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:31.055466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-17T17:06:31.099441Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"43.089418ms","hash":2805165075,"current-db-size-bytes":6627328,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T17:06:31.099569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2805165075,"revision":1534,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T17:08:47.292674Z","caller":"traceutil/trace.go:171","msg":"trace[380971827] transaction","detail":"{read_only:false; response_revision:2615; number_of_response:1; }","duration":"195.793214ms","start":"2024-09-17T17:08:47.096842Z","end":"2024-09-17T17:08:47.292635Z","steps":["trace[380971827] 'process raft request'  (duration: 195.290233ms)"],"step_count":1}
	
	
	==> gcp-auth [f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d] <==
	2024/09/17 16:58:50 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:55 Ready to marshal response ...
	2024/09/17 17:06:55 Ready to write response ...
	2024/09/17 17:07:04 Ready to marshal response ...
	2024/09/17 17:07:04 Ready to write response ...
	2024/09/17 17:07:06 Ready to marshal response ...
	2024/09/17 17:07:06 Ready to write response ...
	2024/09/17 17:07:29 Ready to marshal response ...
	2024/09/17 17:07:29 Ready to write response ...
	2024/09/17 17:07:50 Ready to marshal response ...
	2024/09/17 17:07:50 Ready to write response ...
	2024/09/17 17:07:56 Ready to marshal response ...
	2024/09/17 17:07:56 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:10:17 Ready to marshal response ...
	2024/09/17 17:10:17 Ready to write response ...
	
	
	==> kernel <==
	 17:10:28 up 14 min,  0 users,  load average: 0.28, 0.61, 0.54
	Linux addons-408385 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] <==
	E0917 17:07:32.332918       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:33.341648       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:34.356998       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:35.368574       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:36.377836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 17:07:45.022032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.022241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.069544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.070007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.095472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.095596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.140075       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.140334       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.293286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.293432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:07:46.095884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:07:46.293643       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0917 17:07:46.311410       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0917 17:07:56.668400       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:07:56.842947       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.139.217"}
	I0917 17:08:01.899778       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:08:03.039111       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 17:08:08.207716       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.117.32"}
	I0917 17:10:17.545474       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.45.149"}
	E0917 17:10:18.941843       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] <==
	W0917 17:09:07.836291       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:07.836452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:08.677493       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:08.677708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:31.765193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:31.765517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:46.693253       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:46.693524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:49.044705       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:49.044822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:10:00.543800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:00.544006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:10:17.383730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="48.277036ms"
	I0917 17:10:17.406932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="23.120166ms"
	I0917 17:10:17.419926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.939078ms"
	I0917 17:10:17.420013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.485µs"
	I0917 17:10:18.923285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.504929ms"
	I0917 17:10:18.923511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.932µs"
	I0917 17:10:19.611940       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0917 17:10:19.617164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="12.449µs"
	I0917 17:10:19.620968       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0917 17:10:21.588190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:21.588272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:10:25.769079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:25.769115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 16:56:41.756177       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 16:56:41.772932       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E0917 16:56:41.773152       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:41.851988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 16:56:41.852089       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 16:56:41.852113       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:41.860672       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:41.861044       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:41.861068       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:41.862987       1 config.go:199] "Starting service config controller"
	I0917 16:56:41.863008       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:41.863038       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:41.863044       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:41.863485       1 config.go:328] "Starting node config controller"
	I0917 16:56:41.863493       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:41.963269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:41.963331       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:41.963540       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] <==
	W0917 16:56:32.615429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.615914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:32.615953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.616015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.493645       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:56:33.493786       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 16:56:33.498727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.498778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.500587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:56:33.500622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.737858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:33.737918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.762653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:33.762736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.781707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.781836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.846619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:33.846670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.921594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 16:56:33.922678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.924769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:33.924819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:35.706106       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.745653    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs9wp\" (UniqueName: \"kubernetes.io/projected/a365fa42-68bf-4f57-ad20-e437ef76117e-kube-api-access-bs9wp\") pod \"a365fa42-68bf-4f57-ad20-e437ef76117e\" (UID: \"a365fa42-68bf-4f57-ad20-e437ef76117e\") "
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.748849    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a365fa42-68bf-4f57-ad20-e437ef76117e-kube-api-access-bs9wp" (OuterVolumeSpecName: "kube-api-access-bs9wp") pod "a365fa42-68bf-4f57-ad20-e437ef76117e" (UID: "a365fa42-68bf-4f57-ad20-e437ef76117e"). InnerVolumeSpecName "kube-api-access-bs9wp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.846952    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bs9wp\" (UniqueName: \"kubernetes.io/projected/a365fa42-68bf-4f57-ad20-e437ef76117e-kube-api-access-bs9wp\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.874935    1205 scope.go:117] "RemoveContainer" containerID="18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505"
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.911890    1205 scope.go:117] "RemoveContainer" containerID="18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505"
	Sep 17 17:10:18 addons-408385 kubelet[1205]: E0917 17:10:18.915160    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505\": container with ID starting with 18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505 not found: ID does not exist" containerID="18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505"
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.915260    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505"} err="failed to get container status \"18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505\": rpc error: code = NotFound desc = could not find container \"18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505\": container with ID starting with 18180b1d2a45e61bc74c08b02254b5c0a3901d01890ff4080dd549339ae22505 not found: ID does not exist"
	Sep 17 17:10:18 addons-408385 kubelet[1205]: I0917 17:10:18.934239    1205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-cnzjd" podStartSLOduration=1.224486823 podStartE2EDuration="1.934213853s" podCreationTimestamp="2024-09-17 17:10:17 +0000 UTC" firstStartedPulling="2024-09-17 17:10:17.966586695 +0000 UTC m=+822.828781419" lastFinishedPulling="2024-09-17 17:10:18.676313725 +0000 UTC m=+823.538508449" observedRunningTime="2024-09-17 17:10:18.908477045 +0000 UTC m=+823.770671784" watchObservedRunningTime="2024-09-17 17:10:18.934213853 +0000 UTC m=+823.796408593"
	Sep 17 17:10:19 addons-408385 kubelet[1205]: I0917 17:10:19.296048    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a365fa42-68bf-4f57-ad20-e437ef76117e" path="/var/lib/kubelet/pods/a365fa42-68bf-4f57-ad20-e437ef76117e/volumes"
	Sep 17 17:10:21 addons-408385 kubelet[1205]: I0917 17:10:21.295231    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dcf364-4802-4e08-9db9-0e89c4984788" path="/var/lib/kubelet/pods/46dcf364-4802-4e08-9db9-0e89c4984788/volumes"
	Sep 17 17:10:21 addons-408385 kubelet[1205]: I0917 17:10:21.295765    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f69cee-9c47-4129-9373-eb8999ab009c" path="/var/lib/kubelet/pods/e9f69cee-9c47-4129-9373-eb8999ab009c/volumes"
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.880780    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw9lz\" (UniqueName: \"kubernetes.io/projected/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-kube-api-access-nw9lz\") pod \"049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9\" (UID: \"049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9\") "
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.880868    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-webhook-cert\") pod \"049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9\" (UID: \"049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9\") "
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.883747    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-kube-api-access-nw9lz" (OuterVolumeSpecName: "kube-api-access-nw9lz") pod "049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9" (UID: "049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9"). InnerVolumeSpecName "kube-api-access-nw9lz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.887662    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9" (UID: "049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.901892    1205 scope.go:117] "RemoveContainer" containerID="60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592"
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.930778    1205 scope.go:117] "RemoveContainer" containerID="60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592"
	Sep 17 17:10:22 addons-408385 kubelet[1205]: E0917 17:10:22.931411    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592\": container with ID starting with 60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592 not found: ID does not exist" containerID="60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592"
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.931481    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592"} err="failed to get container status \"60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592\": rpc error: code = NotFound desc = could not find container \"60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592\": container with ID starting with 60c71bad9fa6a1e36178abd84c1ba467a3b3a142b5718369b389d74af6857592 not found: ID does not exist"
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.981978    1205 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-webhook-cert\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:10:22 addons-408385 kubelet[1205]: I0917 17:10:22.982053    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nw9lz\" (UniqueName: \"kubernetes.io/projected/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9-kube-api-access-nw9lz\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:10:23 addons-408385 kubelet[1205]: E0917 17:10:23.294551    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b1cf7e30-fdc3-4fed-88ab-f3634aace95b"
	Sep 17 17:10:23 addons-408385 kubelet[1205]: I0917 17:10:23.297765    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9" path="/var/lib/kubelet/pods/049eb2d7-44c4-41a2-b1b5-bc7c2ca9a3d9/volumes"
	Sep 17 17:10:25 addons-408385 kubelet[1205]: E0917 17:10:25.767089    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593025766701563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:10:25 addons-408385 kubelet[1205]: E0917 17:10:25.767585    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593025766701563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea] <==
	I0917 16:56:47.969062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:48.024282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:48.024402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:48.040055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:48.041757       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8b747fa-ca28-40fb-9f2b-ae004859bb2e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d became leader
	I0917 16:56:48.043770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	I0917 16:56:48.148903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-408385 -n addons-408385
helpers_test.go:261: (dbg) Run:  kubectl --context addons-408385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-408385 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-408385 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-408385/192.168.39.170
	Start Time:       Tue, 17 Sep 2024 16:58:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhf5n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hhf5n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-408385
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m57s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    94s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.59s)

TestAddons/parallel/MetricsServer (346.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.788418ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006258902s
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (76.374779ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m18.45075154s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (90.550208ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m20.402194418s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (70.651474ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m25.347441438s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (80.286117ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m31.988143306s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (80.795268ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m39.360983833s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (80.011805ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 10m50.081483829s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (67.531833ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 11m21.708740378s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (66.792547ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 12m12.281856583s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (66.368166ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 13m16.298736734s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (64.394315ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 14m7.643816047s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (65.679351ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 14m59.940413591s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-408385 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-408385 top pods -n kube-system: exit status 1 (66.828483ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6scmn, age: 15m56.354251659s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-408385 -n addons-408385
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 logs -n 25: (1.523688628s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-581824                                                                     | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-285125                                                                     | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | binary-mirror-510758                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36709                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-510758                                                                     | binary-mirror-510758 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-408385 --wait=true                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-408385 ssh cat                                                                       | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | /opt/local-path-provisioner/pvc-909e1d4d-bf3e-45b2-8d6d-fc1ce31d7fc6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | addons-408385                                                                               |                      |         |         |                     |                     |
	| ip      | addons-408385 ip                                                                            | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-408385 ssh curl -s                                                                   | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-408385                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | -p addons-408385                                                                            |                      |         |         |                     |                     |
	| ip      | addons-408385 ip                                                                            | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-408385 addons disable                                                                | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:10 UTC | 17 Sep 24 17:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-408385 addons                                                                        | addons-408385        | jenkins | v1.34.0 | 17 Sep 24 17:12 UTC | 17 Sep 24 17:12 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:51.791795   18924 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:51.792044   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792053   18924 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:51.792058   18924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:51.792230   18924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 16:55:51.792827   18924 out.go:352] Setting JSON to false
	I0917 16:55:51.793665   18924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2267,"bootTime":1726589885,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:51.793765   18924 start.go:139] virtualization: kvm guest
	I0917 16:55:51.795973   18924 out.go:177] * [addons-408385] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:51.797387   18924 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:55:51.797381   18924 notify.go:220] Checking for updates...
	I0917 16:55:51.798951   18924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:51.800529   18924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:55:51.801832   18924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:51.803070   18924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 16:55:51.804253   18924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:55:51.805653   18924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:51.838070   18924 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 16:55:51.839376   18924 start.go:297] selected driver: kvm2
	I0917 16:55:51.839394   18924 start.go:901] validating driver "kvm2" against <nil>
	I0917 16:55:51.839405   18924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:55:51.840126   18924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.840207   18924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 16:55:51.855471   18924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 16:55:51.855528   18924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:51.855817   18924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:55:51.855861   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:55:51.855920   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:55:51.855931   18924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:51.855997   18924 start.go:340] cluster config:
	{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:51.856122   18924 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:51.858118   18924 out.go:177] * Starting "addons-408385" primary control-plane node in "addons-408385" cluster
	I0917 16:55:51.859487   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:55:51.859520   18924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:51.859551   18924 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:51.859643   18924 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 16:55:51.859654   18924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 16:55:51.859979   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:55:51.860003   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json: {Name:mkaab3d4715b6a1329fbbb57cdab9fd6bad92461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:51.860158   18924 start.go:360] acquireMachinesLock for addons-408385: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 16:55:51.860218   18924 start.go:364] duration metric: took 44.183µs to acquireMachinesLock for "addons-408385"
	I0917 16:55:51.860239   18924 start.go:93] Provisioning new machine with config: &{Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:55:51.860305   18924 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 16:55:51.862121   18924 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 16:55:51.862257   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:55:51.862301   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:55:51.877059   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0917 16:55:51.877513   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:55:51.877999   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:55:51.878018   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:55:51.878383   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:55:51.878572   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:55:51.878714   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:55:51.878883   18924 start.go:159] libmachine.API.Create for "addons-408385" (driver="kvm2")
	I0917 16:55:51.878911   18924 client.go:168] LocalClient.Create starting
	I0917 16:55:51.878946   18924 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 16:55:51.947974   18924 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 16:55:52.056813   18924 main.go:141] libmachine: Running pre-create checks...
	I0917 16:55:52.056834   18924 main.go:141] libmachine: (addons-408385) Calling .PreCreateCheck
	I0917 16:55:52.057355   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:55:52.057806   18924 main.go:141] libmachine: Creating machine...
	I0917 16:55:52.057820   18924 main.go:141] libmachine: (addons-408385) Calling .Create
	I0917 16:55:52.057938   18924 main.go:141] libmachine: (addons-408385) Creating KVM machine...
	I0917 16:55:52.059242   18924 main.go:141] libmachine: (addons-408385) DBG | found existing default KVM network
	I0917 16:55:52.060009   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.059868   18946 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0917 16:55:52.060022   18924 main.go:141] libmachine: (addons-408385) DBG | created network xml: 
	I0917 16:55:52.060030   18924 main.go:141] libmachine: (addons-408385) DBG | <network>
	I0917 16:55:52.060035   18924 main.go:141] libmachine: (addons-408385) DBG |   <name>mk-addons-408385</name>
	I0917 16:55:52.060041   18924 main.go:141] libmachine: (addons-408385) DBG |   <dns enable='no'/>
	I0917 16:55:52.060045   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060051   18924 main.go:141] libmachine: (addons-408385) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0917 16:55:52.060058   18924 main.go:141] libmachine: (addons-408385) DBG |     <dhcp>
	I0917 16:55:52.060064   18924 main.go:141] libmachine: (addons-408385) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0917 16:55:52.060070   18924 main.go:141] libmachine: (addons-408385) DBG |     </dhcp>
	I0917 16:55:52.060083   18924 main.go:141] libmachine: (addons-408385) DBG |   </ip>
	I0917 16:55:52.060092   18924 main.go:141] libmachine: (addons-408385) DBG |   
	I0917 16:55:52.060101   18924 main.go:141] libmachine: (addons-408385) DBG | </network>
	I0917 16:55:52.060112   18924 main.go:141] libmachine: (addons-408385) DBG | 
	I0917 16:55:52.065525   18924 main.go:141] libmachine: (addons-408385) DBG | trying to create private KVM network mk-addons-408385 192.168.39.0/24...
	I0917 16:55:52.130546   18924 main.go:141] libmachine: (addons-408385) DBG | private KVM network mk-addons-408385 192.168.39.0/24 created
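
For reference, the private network XML that the driver logs above has a small, fixed shape. The following is a minimal Go sketch, using only the standard library's encoding/xml and struct names of my own choosing (this is not minikube's code), that marshals an equivalent definition:

// Sketch only: Go structs that marshal to a libvirt network definition shaped
// like the one in the log above. Type names are illustrative, not minikube's.
package main

import (
	"encoding/xml"
	"fmt"
)

type dhcpRange struct {
	Start string `xml:"start,attr"`
	End   string `xml:"end,attr"`
}

type ipElement struct {
	Address string    `xml:"address,attr"`
	Netmask string    `xml:"netmask,attr"`
	DHCP    dhcpRange `xml:"dhcp>range"`
}

type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     struct {
		Enable string `xml:"enable,attr"`
	} `xml:"dns"`
	IP ipElement `xml:"ip"`
}

func main() {
	n := network{
		Name: "mk-addons-408385",
		IP: ipElement{
			Address: "192.168.39.1",
			Netmask: "255.255.255.0",
			DHCP:    dhcpRange{Start: "192.168.39.2", End: "192.168.39.253"},
		},
	}
	n.DNS.Enable = "no" // DNS disabled, as in the logged definition
	out, _ := xml.MarshalIndent(n, "", "  ")
	fmt.Println(string(out))
}
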
	I0917 16:55:52.130574   18924 main.go:141] libmachine: (addons-408385) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.130589   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.130546   18946 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.130612   18924 main.go:141] libmachine: (addons-408385) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 16:55:52.130765   18924 main.go:141] libmachine: (addons-408385) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 16:55:52.385741   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.385631   18946 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa...
	I0917 16:55:52.511387   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511277   18946 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk...
	I0917 16:55:52.511413   18924 main.go:141] libmachine: (addons-408385) DBG | Writing magic tar header
	I0917 16:55:52.511427   18924 main.go:141] libmachine: (addons-408385) DBG | Writing SSH key tar header
	I0917 16:55:52.511451   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:52.511387   18946 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 ...
	I0917 16:55:52.511506   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385
	I0917 16:55:52.511525   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 16:55:52.511538   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385 (perms=drwx------)
	I0917 16:55:52.511548   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:52.511562   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 16:55:52.511573   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 16:55:52.511586   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 16:55:52.511598   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 16:55:52.511610   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 16:55:52.511622   18924 main.go:141] libmachine: (addons-408385) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 16:55:52.511634   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 16:55:52.511646   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home/jenkins
	I0917 16:55:52.511656   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:52.511669   18924 main.go:141] libmachine: (addons-408385) DBG | Checking permissions on dir: /home
	I0917 16:55:52.511682   18924 main.go:141] libmachine: (addons-408385) DBG | Skipping /home - not owner
	I0917 16:55:52.512603   18924 main.go:141] libmachine: (addons-408385) define libvirt domain using xml: 
	I0917 16:55:52.512625   18924 main.go:141] libmachine: (addons-408385) <domain type='kvm'>
	I0917 16:55:52.512635   18924 main.go:141] libmachine: (addons-408385)   <name>addons-408385</name>
	I0917 16:55:52.512642   18924 main.go:141] libmachine: (addons-408385)   <memory unit='MiB'>4000</memory>
	I0917 16:55:52.512649   18924 main.go:141] libmachine: (addons-408385)   <vcpu>2</vcpu>
	I0917 16:55:52.512661   18924 main.go:141] libmachine: (addons-408385)   <features>
	I0917 16:55:52.512670   18924 main.go:141] libmachine: (addons-408385)     <acpi/>
	I0917 16:55:52.512679   18924 main.go:141] libmachine: (addons-408385)     <apic/>
	I0917 16:55:52.512690   18924 main.go:141] libmachine: (addons-408385)     <pae/>
	I0917 16:55:52.512699   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.512706   18924 main.go:141] libmachine: (addons-408385)   </features>
	I0917 16:55:52.512714   18924 main.go:141] libmachine: (addons-408385)   <cpu mode='host-passthrough'>
	I0917 16:55:52.512721   18924 main.go:141] libmachine: (addons-408385)   
	I0917 16:55:52.512730   18924 main.go:141] libmachine: (addons-408385)   </cpu>
	I0917 16:55:52.512740   18924 main.go:141] libmachine: (addons-408385)   <os>
	I0917 16:55:52.512749   18924 main.go:141] libmachine: (addons-408385)     <type>hvm</type>
	I0917 16:55:52.512760   18924 main.go:141] libmachine: (addons-408385)     <boot dev='cdrom'/>
	I0917 16:55:52.512769   18924 main.go:141] libmachine: (addons-408385)     <boot dev='hd'/>
	I0917 16:55:52.512778   18924 main.go:141] libmachine: (addons-408385)     <bootmenu enable='no'/>
	I0917 16:55:52.512784   18924 main.go:141] libmachine: (addons-408385)   </os>
	I0917 16:55:52.512790   18924 main.go:141] libmachine: (addons-408385)   <devices>
	I0917 16:55:52.512802   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='cdrom'>
	I0917 16:55:52.512812   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/boot2docker.iso'/>
	I0917 16:55:52.512819   18924 main.go:141] libmachine: (addons-408385)       <target dev='hdc' bus='scsi'/>
	I0917 16:55:52.512824   18924 main.go:141] libmachine: (addons-408385)       <readonly/>
	I0917 16:55:52.512834   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512861   18924 main.go:141] libmachine: (addons-408385)     <disk type='file' device='disk'>
	I0917 16:55:52.512887   18924 main.go:141] libmachine: (addons-408385)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 16:55:52.512920   18924 main.go:141] libmachine: (addons-408385)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/addons-408385.rawdisk'/>
	I0917 16:55:52.512947   18924 main.go:141] libmachine: (addons-408385)       <target dev='hda' bus='virtio'/>
	I0917 16:55:52.512961   18924 main.go:141] libmachine: (addons-408385)     </disk>
	I0917 16:55:52.512977   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.512987   18924 main.go:141] libmachine: (addons-408385)       <source network='mk-addons-408385'/>
	I0917 16:55:52.512994   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.512999   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513008   18924 main.go:141] libmachine: (addons-408385)     <interface type='network'>
	I0917 16:55:52.513020   18924 main.go:141] libmachine: (addons-408385)       <source network='default'/>
	I0917 16:55:52.513030   18924 main.go:141] libmachine: (addons-408385)       <model type='virtio'/>
	I0917 16:55:52.513041   18924 main.go:141] libmachine: (addons-408385)     </interface>
	I0917 16:55:52.513054   18924 main.go:141] libmachine: (addons-408385)     <serial type='pty'>
	I0917 16:55:52.513065   18924 main.go:141] libmachine: (addons-408385)       <target port='0'/>
	I0917 16:55:52.513074   18924 main.go:141] libmachine: (addons-408385)     </serial>
	I0917 16:55:52.513083   18924 main.go:141] libmachine: (addons-408385)     <console type='pty'>
	I0917 16:55:52.513090   18924 main.go:141] libmachine: (addons-408385)       <target type='serial' port='0'/>
	I0917 16:55:52.513100   18924 main.go:141] libmachine: (addons-408385)     </console>
	I0917 16:55:52.513110   18924 main.go:141] libmachine: (addons-408385)     <rng model='virtio'>
	I0917 16:55:52.513123   18924 main.go:141] libmachine: (addons-408385)       <backend model='random'>/dev/random</backend>
	I0917 16:55:52.513136   18924 main.go:141] libmachine: (addons-408385)     </rng>
	I0917 16:55:52.513146   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513151   18924 main.go:141] libmachine: (addons-408385)     
	I0917 16:55:52.513161   18924 main.go:141] libmachine: (addons-408385)   </devices>
	I0917 16:55:52.513168   18924 main.go:141] libmachine: (addons-408385) </domain>
	I0917 16:55:52.513179   18924 main.go:141] libmachine: (addons-408385) 
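
After printing the domain XML, the driver defines and boots the domain through libvirt. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings are available (illustrative only, not the kvm2 driver's actual implementation):

// Sketch only: define a persistent libvirt domain from an XML description
// and start it, analogous to "define libvirt domain using xml" above.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding a domain XML like the one shown in the log.
	xmlDesc, err := os.ReadFile("addons-408385.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xmlDesc)) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatal(err)
	}
}
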
	I0917 16:55:52.519149   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:10:0b:0b in network default
	I0917 16:55:52.519688   18924 main.go:141] libmachine: (addons-408385) Ensuring networks are active...
	I0917 16:55:52.519712   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:52.520323   18924 main.go:141] libmachine: (addons-408385) Ensuring network default is active
	I0917 16:55:52.520629   18924 main.go:141] libmachine: (addons-408385) Ensuring network mk-addons-408385 is active
	I0917 16:55:52.521053   18924 main.go:141] libmachine: (addons-408385) Getting domain xml...
	I0917 16:55:52.521710   18924 main.go:141] libmachine: (addons-408385) Creating domain...
	I0917 16:55:53.811430   18924 main.go:141] libmachine: (addons-408385) Waiting to get IP...
	I0917 16:55:53.812152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:53.812522   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:53.812543   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:53.812493   18946 retry.go:31] will retry after 197.5195ms: waiting for machine to come up
	I0917 16:55:54.012026   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.012441   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.012468   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.012412   18946 retry.go:31] will retry after 326.010953ms: waiting for machine to come up
	I0917 16:55:54.339858   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.340287   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.340312   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.340239   18946 retry.go:31] will retry after 296.869686ms: waiting for machine to come up
	I0917 16:55:54.638673   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:54.639104   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:54.639128   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:54.639060   18946 retry.go:31] will retry after 392.314611ms: waiting for machine to come up
	I0917 16:55:55.032985   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.033655   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.033684   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.033600   18946 retry.go:31] will retry after 585.264566ms: waiting for machine to come up
	I0917 16:55:55.620073   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:55.620498   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:55.620534   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:55.620466   18946 retry.go:31] will retry after 797.322744ms: waiting for machine to come up
	I0917 16:55:56.419607   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:56.420088   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:56.420115   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:56.420046   18946 retry.go:31] will retry after 1.028584855s: waiting for machine to come up
	I0917 16:55:57.450058   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:57.450474   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:57.450503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:57.450420   18946 retry.go:31] will retry after 1.43599402s: waiting for machine to come up
	I0917 16:55:58.888104   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:55:58.888459   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:55:58.888481   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:55:58.888437   18946 retry.go:31] will retry after 1.280603811s: waiting for machine to come up
	I0917 16:56:00.170844   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:00.171138   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:00.171158   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:00.171116   18946 retry.go:31] will retry after 1.674811656s: waiting for machine to come up
	I0917 16:56:01.848038   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:01.848477   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:01.848503   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:01.848445   18946 retry.go:31] will retry after 2.792716027s: waiting for machine to come up
	I0917 16:56:04.644899   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:04.645317   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:04.645336   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:04.645282   18946 retry.go:31] will retry after 2.720169067s: waiting for machine to come up
	I0917 16:56:07.367470   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:07.367874   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:07.367899   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:07.367847   18946 retry.go:31] will retry after 4.528965555s: waiting for machine to come up
	I0917 16:56:11.898213   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:11.898579   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find current IP address of domain addons-408385 in network mk-addons-408385
	I0917 16:56:11.898600   18924 main.go:141] libmachine: (addons-408385) DBG | I0917 16:56:11.898539   18946 retry.go:31] will retry after 4.262922802s: waiting for machine to come up
	I0917 16:56:16.165468   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.165964   18924 main.go:141] libmachine: (addons-408385) Found IP for machine: 192.168.39.170
	I0917 16:56:16.165979   18924 main.go:141] libmachine: (addons-408385) Reserving static IP address...
	I0917 16:56:16.165988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has current primary IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.166352   18924 main.go:141] libmachine: (addons-408385) DBG | unable to find host DHCP lease matching {name: "addons-408385", mac: "52:54:00:69:b5:a2", ip: "192.168.39.170"} in network mk-addons-408385
	I0917 16:56:16.239610   18924 main.go:141] libmachine: (addons-408385) DBG | Getting to WaitForSSH function...
	I0917 16:56:16.239655   18924 main.go:141] libmachine: (addons-408385) Reserved static IP address: 192.168.39.170
	I0917 16:56:16.239670   18924 main.go:141] libmachine: (addons-408385) Waiting for SSH to be available...
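
The repeated "will retry after ..." lines above follow a simple grow-and-retry pattern while the guest acquires a DHCP lease. A standard-library Go sketch of that pattern; lookupIP and waitForIP are placeholder names, not minikube APIs:

// Sketch only: retry with a jittered, roughly doubling delay until an IP
// is available or a deadline passes, similar to the intervals in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

func lookupIP() (string, error) {
	// Placeholder: a real implementation would query the libvirt DHCP leases.
	return "", errNoLease
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
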
	I0917 16:56:16.242205   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242648   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.242681   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.242868   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH client type: external
	I0917 16:56:16.242892   18924 main.go:141] libmachine: (addons-408385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa (-rw-------)
	I0917 16:56:16.242919   18924 main.go:141] libmachine: (addons-408385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 16:56:16.242929   18924 main.go:141] libmachine: (addons-408385) DBG | About to run SSH command:
	I0917 16:56:16.242938   18924 main.go:141] libmachine: (addons-408385) DBG | exit 0
	I0917 16:56:16.377461   18924 main.go:141] libmachine: (addons-408385) DBG | SSH cmd err, output: <nil>: 
	I0917 16:56:16.377719   18924 main.go:141] libmachine: (addons-408385) KVM machine creation complete!
	I0917 16:56:16.378103   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:16.378639   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378776   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:16.378886   18924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 16:56:16.378895   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:16.380224   18924 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 16:56:16.380240   18924 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 16:56:16.380247   18924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 16:56:16.380282   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.382400   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382795   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.382826   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.382937   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.383090   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383243   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.383336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.383453   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.383654   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.383667   18924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 16:56:16.496650   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:16.496684   18924 main.go:141] libmachine: Detecting the provisioner...
	I0917 16:56:16.496692   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.499052   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499387   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.499419   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.499509   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.499704   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499841   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.499969   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.500153   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.500355   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.500368   18924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 16:56:16.614164   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 16:56:16.614231   18924 main.go:141] libmachine: found compatible host: buildroot
	I0917 16:56:16.614239   18924 main.go:141] libmachine: Provisioning with buildroot...
	I0917 16:56:16.614251   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614509   18924 buildroot.go:166] provisioning hostname "addons-408385"
	I0917 16:56:16.614541   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.614725   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.616892   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617265   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.617292   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.617459   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.617618   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617766   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.617880   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.618037   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.618259   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.618274   18924 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-408385 && echo "addons-408385" | sudo tee /etc/hostname
	I0917 16:56:16.748306   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-408385
	
	I0917 16:56:16.748338   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.751036   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751353   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.751375   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.751594   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:16.751810   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.751967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:16.752091   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:16.752236   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:16.752408   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:16.752423   18924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-408385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-408385/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-408385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:56:16.874871   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
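
Each provisioning step above is a single command run over SSH as the docker user with the machine's id_rsa key. A minimal sketch of such a step using golang.org/x/crypto/ssh (illustrative only; the log also shows libmachine falling back to the external /usr/bin/ssh client earlier):

// Sketch only: run one remote command over SSH, mirroring the
// "About to run SSH command" steps in the log. Paths and the address
// are taken from the log output above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
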
	I0917 16:56:16.874903   18924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 16:56:16.874921   18924 buildroot.go:174] setting up certificates
	I0917 16:56:16.874931   18924 provision.go:84] configureAuth start
	I0917 16:56:16.874941   18924 main.go:141] libmachine: (addons-408385) Calling .GetMachineName
	I0917 16:56:16.875174   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:16.877616   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.877962   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.877988   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.878128   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:16.879974   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880235   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:16.880259   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:16.880362   18924 provision.go:143] copyHostCerts
	I0917 16:56:16.880447   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 16:56:16.880581   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 16:56:16.880694   18924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 16:56:16.880808   18924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.addons-408385 san=[127.0.0.1 192.168.39.170 addons-408385 localhost minikube]
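
The server certificate generated above carries SANs for 127.0.0.1, 192.168.39.170, addons-408385, localhost and minikube. A standard-library Go sketch of issuing a certificate with those SANs; a throwaway CA is generated here purely for illustration, whereas minikube signs with the existing ca.pem/ca-key.pem:

// Sketch only: issue a server certificate with IP and DNS SANs using crypto/x509.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-408385"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-408385", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.170")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
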
	I0917 16:56:17.201888   18924 provision.go:177] copyRemoteCerts
	I0917 16:56:17.201953   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:56:17.201979   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.204413   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204738   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.204767   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.204895   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.205077   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.205246   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.205392   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.291808   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 16:56:17.316923   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:56:17.341072   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 16:56:17.365516   18924 provision.go:87] duration metric: took 490.573886ms to configureAuth
	I0917 16:56:17.365539   18924 buildroot.go:189] setting minikube options for container-runtime
	I0917 16:56:17.365730   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:17.365826   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.368283   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368639   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.368670   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.368823   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.369022   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369153   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.369339   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.369514   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.369693   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.369712   18924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 16:56:17.597824   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 16:56:17.597848   18924 main.go:141] libmachine: Checking connection to Docker...
	I0917 16:56:17.597855   18924 main.go:141] libmachine: (addons-408385) Calling .GetURL
	I0917 16:56:17.599183   18924 main.go:141] libmachine: (addons-408385) DBG | Using libvirt version 6000000
	I0917 16:56:17.601596   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.601942   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.602006   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.602117   18924 main.go:141] libmachine: Docker is up and running!
	I0917 16:56:17.602131   18924 main.go:141] libmachine: Reticulating splines...
	I0917 16:56:17.602139   18924 client.go:171] duration metric: took 25.723220135s to LocalClient.Create
	I0917 16:56:17.602162   18924 start.go:167] duration metric: took 25.723279645s to libmachine.API.Create "addons-408385"
	I0917 16:56:17.602175   18924 start.go:293] postStartSetup for "addons-408385" (driver="kvm2")
	I0917 16:56:17.602188   18924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:17.602210   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.602465   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:17.602494   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.604650   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.604946   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.604964   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.605100   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.605274   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.605409   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.605565   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.694995   18924 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:56:17.699639   18924 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 16:56:17.699666   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 16:56:17.699739   18924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 16:56:17.699761   18924 start.go:296] duration metric: took 97.580146ms for postStartSetup
	I0917 16:56:17.699789   18924 main.go:141] libmachine: (addons-408385) Calling .GetConfigRaw
	I0917 16:56:17.700415   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.702737   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703149   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.703177   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.703448   18924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/config.json ...
	I0917 16:56:17.703625   18924 start.go:128] duration metric: took 25.843310151s to createHost
	I0917 16:56:17.703646   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.705890   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706224   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.706252   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.706358   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.706557   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706719   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.706848   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.706979   18924 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:17.707143   18924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0917 16:56:17.707155   18924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 16:56:17.822141   18924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726592177.789241010
	
	I0917 16:56:17.822164   18924 fix.go:216] guest clock: 1726592177.789241010
	I0917 16:56:17.822171   18924 fix.go:229] Guest: 2024-09-17 16:56:17.78924101 +0000 UTC Remote: 2024-09-17 16:56:17.703636441 +0000 UTC m=+25.947315089 (delta=85.604569ms)
	I0917 16:56:17.822210   18924 fix.go:200] guest clock delta is within tolerance: 85.604569ms
	I0917 16:56:17.822215   18924 start.go:83] releasing machines lock for "addons-408385", held for 25.961986034s
	I0917 16:56:17.822238   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.822502   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:17.825005   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825336   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.825360   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.825513   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826069   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826274   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:17.826383   18924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 16:56:17.826443   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.826489   18924 ssh_runner.go:195] Run: cat /version.json
	I0917 16:56:17.826513   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:17.829125   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829486   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829512   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829533   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829632   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.829794   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.829906   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:17.829934   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:17.829954   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830071   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:17.830128   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.830224   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:17.830373   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:17.830521   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:17.951534   18924 ssh_runner.go:195] Run: systemctl --version
	I0917 16:56:17.958040   18924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 16:56:18.115686   18924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 16:56:18.123126   18924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 16:56:18.123194   18924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:18.140793   18924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 16:56:18.140817   18924 start.go:495] detecting cgroup driver to use...
	I0917 16:56:18.140888   18924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 16:56:18.158500   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 16:56:18.173453   18924 docker.go:217] disabling cri-docker service (if available) ...
	I0917 16:56:18.173513   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 16:56:18.187957   18924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 16:56:18.202598   18924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 16:56:18.333027   18924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 16:56:18.469130   18924 docker.go:233] disabling docker service ...
	I0917 16:56:18.469199   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 16:56:18.484667   18924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 16:56:18.498998   18924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 16:56:18.641389   18924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 16:56:18.776008   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 16:56:18.790837   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:18.812674   18924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 16:56:18.812737   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.823898   18924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 16:56:18.823956   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.834933   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.845553   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.856619   18924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:18.868015   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.879257   18924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.899805   18924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 16:56:18.911427   18924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:18.921735   18924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 16:56:18.921790   18924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 16:56:18.936457   18924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:56:18.946747   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:19.065494   18924 ssh_runner.go:195] Run: sudo systemctl restart crio
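
Note: the sed commands above set the CRI-O pause image, switch the cgroup manager to cgroupfs, pin conmon to the pod cgroup, and open unprivileged ports via a default sysctl before crio is restarted. The full /etc/crio/crio.conf.d/02-crio.conf is never printed in this log, so the sketch below only reconstructs the keys those edits leave behind; the TOML section headers are assumptions, not taken from the log.

    # Approximate end state of the drop-in after the sed edits above (illustrative, not from the log).
    sudo cat /etc/crio/crio.conf.d/02-crio.conf   # expected to contain roughly:
    #   [crio.image]
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
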
	I0917 16:56:19.226108   18924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 16:56:19.226205   18924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 16:56:19.231213   18924 start.go:563] Will wait 60s for crictl version
	I0917 16:56:19.231297   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:56:19.235087   18924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:56:19.281633   18924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
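
Note: the plain `crictl version` call above succeeds because /etc/crictl.yaml, written earlier in this log, points crictl at CRI-O's socket. As a sketch (not a step minikube runs here), the same check can be made without relying on that file by passing the endpoint explicitly:

    # Equivalent explicit-endpoint check; the crictl.yaml written above makes the flag unnecessary.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
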
	I0917 16:56:19.281783   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.311850   18924 ssh_runner.go:195] Run: crio --version
	I0917 16:56:19.341785   18924 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 16:56:19.343242   18924 main.go:141] libmachine: (addons-408385) Calling .GetIP
	I0917 16:56:19.345825   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346167   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:19.346191   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:19.346407   18924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:19.350778   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:19.364110   18924 kubeadm.go:883] updating cluster {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:19.364217   18924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:56:19.364273   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:19.396930   18924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 16:56:19.397013   18924 ssh_runner.go:195] Run: which lz4
	I0917 16:56:19.401270   18924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 16:56:19.405740   18924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 16:56:19.405769   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 16:56:20.822525   18924 crio.go:462] duration metric: took 1.421306506s to copy over tarball
	I0917 16:56:20.822624   18924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 16:56:23.006691   18924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.184029221s)
	I0917 16:56:23.006730   18924 crio.go:469] duration metric: took 2.18417646s to extract the tarball
	I0917 16:56:23.006741   18924 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 16:56:23.043946   18924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 16:56:23.086263   18924 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 16:56:23.086285   18924 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:56:23.086293   18924 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.1 crio true true} ...
	I0917 16:56:23.086391   18924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-408385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 16:56:23.086476   18924 ssh_runner.go:195] Run: crio config
	I0917 16:56:23.135589   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:23.135612   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:23.135622   18924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:23.135642   18924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-408385 NodeName:addons-408385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:23.135765   18924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-408385"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 16:56:23.135824   18924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:23.146424   18924 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:56:23.146483   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:23.156664   18924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 16:56:23.176236   18924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:23.195926   18924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
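
Note: the kubeadm config generated above (copied to /var/tmp/minikube/kubeadm.yaml.new) still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 flags as deprecated during init later in this log. A sketch of the migration that warning recommends, using the kubeadm binary path shown in the log; minikube does not run this step here, and the output filename is only illustrative:

    # Not executed in this log; illustrates the `kubeadm config migrate` the deprecation warning suggests.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml.new \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
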
	I0917 16:56:23.215956   18924 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:23.220278   18924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:23.233718   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:23.361479   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:23.378343   18924 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385 for IP: 192.168.39.170
	I0917 16:56:23.378364   18924 certs.go:194] generating shared ca certs ...
	I0917 16:56:23.378379   18924 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.378538   18924 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 16:56:23.468659   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt ...
	I0917 16:56:23.468687   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt: {Name:mk4b2dc121f54e472a610da41ce39781730efcb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468849   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key ...
	I0917 16:56:23.468860   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key: {Name:mk39fdbf9eb5c96a10b5f07aaa642e9ef6ef62c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.468930   18924 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 16:56:23.595987   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt ...
	I0917 16:56:23.596018   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt: {Name:mk688819f8e2946789f357ecd51fe07706693989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596170   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key ...
	I0917 16:56:23.596179   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key: {Name:mkcde83262d3acd542cf7897dccc5670ae8cce18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.596265   18924 certs.go:256] generating profile certs ...
	I0917 16:56:23.596328   18924 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key
	I0917 16:56:23.596374   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt with IP's: []
	I0917 16:56:23.869724   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt ...
	I0917 16:56:23.869759   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: {Name:mk4d7f220fa0245c5bbf00a3bd85f1e0aa7b9b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.869952   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key ...
	I0917 16:56:23.869965   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.key: {Name:mka2d16d15d95cd3b1c29597e7f457020bb94a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.870061   18924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253
	I0917 16:56:23.870080   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170]
	I0917 16:56:24.042828   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 ...
	I0917 16:56:24.042859   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253: {Name:mkcf5a60df0a4773d88e8945f55342f4090e0047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043040   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 ...
	I0917 16:56:24.043056   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253: {Name:mk4c9b250fe83846f2bf2a73f79edfbf255dff83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.043155   18924 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt
	I0917 16:56:24.043233   18924 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key.7e131253 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key
	I0917 16:56:24.043281   18924 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key
	I0917 16:56:24.043297   18924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt with IP's: []
	I0917 16:56:24.187225   18924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt ...
	I0917 16:56:24.187252   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt: {Name:mk2cb67c490b7c4e2ac97ea0e98192c0133b5d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187447   18924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key ...
	I0917 16:56:24.187462   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key: {Name:mk7886820c83ede55497d40d59a86ffc001d73bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:24.187650   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 16:56:24.187683   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 16:56:24.187708   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:24.187731   18924 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:24.188296   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:24.217099   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:56:24.260095   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:24.286974   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 16:56:24.312555   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:56:24.338456   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:24.364498   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:24.390393   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 16:56:24.416565   18924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:24.441061   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
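
Note: among the certificates copied to the node above is the apiserver cert, which the earlier crypto.go:68 line shows being signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.170. A quick, illustrative way to confirm those SANs on the node, using the destination path from the scp lines above:

    # Inspect the SANs baked into the apiserver certificate copied to the node (illustrative check).
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
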
	I0917 16:56:24.459229   18924 ssh_runner.go:195] Run: openssl version
	I0917 16:56:24.466207   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:24.477993   18924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482776   18924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.482851   18924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:24.489986   18924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
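
Note: the two steps above install minikubeCA into the system trust store the way OpenSSL expects: the cert is hashed with `openssl x509 -hash`, and a `<hash>.0` symlink (b5213941.0 here) is created under /etc/ssl/certs so TLS clients using the system bundle trust it. Reproduced as a standalone sketch:

    # How the b5213941.0 symlink name used above is derived (OpenSSL subject-hash lookup convention).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
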
	I0917 16:56:24.501914   18924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:24.506316   18924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:24.506374   18924 kubeadm.go:392] StartCluster: {Name:addons-408385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-408385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:24.506440   18924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 16:56:24.506497   18924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 16:56:24.546313   18924 cri.go:89] found id: ""
	I0917 16:56:24.546370   18924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:24.556630   18924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:24.567104   18924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:24.577871   18924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:24.577897   18924 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:24.577941   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:24.588136   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:24.588194   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:24.598858   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:24.608830   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:24.608895   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:24.619369   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.630137   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:24.630198   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:24.640661   18924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:24.650527   18924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:24.650585   18924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:24.661071   18924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 16:56:24.716386   18924 kubeadm.go:310] W0917 16:56:24.688487     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.717689   18924 kubeadm.go:310] W0917 16:56:24.690025     813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:24.829103   18924 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:56:35.968996   18924 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:35.969071   18924 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:35.969172   18924 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:35.969326   18924 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:35.969456   18924 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:35.969552   18924 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:35.971346   18924 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:35.971417   18924 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:35.971479   18924 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:35.971560   18924 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:35.971628   18924 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:35.971688   18924 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:35.971734   18924 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:35.971786   18924 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:35.971889   18924 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.971939   18924 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:35.972038   18924 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-408385 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0917 16:56:35.972112   18924 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:35.972189   18924 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:35.972237   18924 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:35.972303   18924 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:35.972346   18924 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:35.972402   18924 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:35.972454   18924 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:35.972511   18924 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:35.972592   18924 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:35.972711   18924 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:35.972783   18924 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:35.975168   18924 out.go:235]   - Booting up control plane ...
	I0917 16:56:35.975264   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:35.975333   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:35.975390   18924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:35.975497   18924 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:35.975587   18924 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:35.975627   18924 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:35.975737   18924 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:35.975844   18924 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:35.975901   18924 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001493427s
	I0917 16:56:35.975973   18924 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:35.976034   18924 kubeadm.go:310] [api-check] The API server is healthy after 5.001561419s
	I0917 16:56:35.976169   18924 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:35.976274   18924 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:35.976324   18924 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:35.976482   18924 kubeadm.go:310] [mark-control-plane] Marking the node addons-408385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:35.976537   18924 kubeadm.go:310] [bootstrap-token] Using token: sa12t0.gjj5918ic1mqv0s7
	I0917 16:56:35.977945   18924 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:35.978054   18924 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:35.978128   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:35.978288   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:35.978410   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:35.978518   18924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:35.978615   18924 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:35.978719   18924 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:35.978764   18924 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:35.978818   18924 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:35.978838   18924 kubeadm.go:310] 
	I0917 16:56:35.978908   18924 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:35.978916   18924 kubeadm.go:310] 
	I0917 16:56:35.978996   18924 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:35.979002   18924 kubeadm.go:310] 
	I0917 16:56:35.979023   18924 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:35.979079   18924 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:35.979124   18924 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:35.979130   18924 kubeadm.go:310] 
	I0917 16:56:35.979179   18924 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:35.979186   18924 kubeadm.go:310] 
	I0917 16:56:35.979225   18924 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:35.979231   18924 kubeadm.go:310] 
	I0917 16:56:35.979277   18924 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:35.979341   18924 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:35.979408   18924 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:35.979414   18924 kubeadm.go:310] 
	I0917 16:56:35.979487   18924 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:35.979556   18924 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:35.979562   18924 kubeadm.go:310] 
	I0917 16:56:35.979647   18924 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.979750   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 16:56:35.979771   18924 kubeadm.go:310] 	--control-plane 
	I0917 16:56:35.979776   18924 kubeadm.go:310] 
	I0917 16:56:35.979853   18924 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:35.979861   18924 kubeadm.go:310] 
	I0917 16:56:35.979942   18924 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sa12t0.gjj5918ic1mqv0s7 \
	I0917 16:56:35.980055   18924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
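
Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert the log copied to /var/lib/minikube/certs/ca.crt; this is the standard kubeadm recipe, not a step minikube performs in this log:

    # Recompute the discovery token CA cert hash from the cluster CA (standard kubeadm recipe).
    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
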
	I0917 16:56:35.980068   18924 cni.go:84] Creating CNI manager for ""
	I0917 16:56:35.980074   18924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:56:35.982263   18924 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:35.983608   18924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:35.994882   18924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
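
Note: the 496-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. Its contents are not shown in the log, so the snippet below is only an illustrative bridge-plus-portmap conflist of that general shape (the .example filename and all field values are assumptions, not the actual file):

    # Illustrative only: a bridge CNI config of the kind written above; the real 1-k8s.conflist is not shown.
    sudo tee /etc/cni/net.d/1-k8s.conflist.example >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
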
	I0917 16:56:36.019583   18924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:36.019687   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.019738   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-408385 minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-408385 minikube.k8s.io/primary=true
	I0917 16:56:36.048300   18924 ops.go:34] apiserver oom_adj: -16
	I0917 16:56:36.170162   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.670383   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.170820   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:37.671076   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.170926   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:38.671033   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.170837   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:39.670394   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.171111   18924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:40.279973   18924 kubeadm.go:1113] duration metric: took 4.260359264s to wait for elevateKubeSystemPrivileges
	I0917 16:56:40.280020   18924 kubeadm.go:394] duration metric: took 15.773648579s to StartCluster
	I0917 16:56:40.280041   18924 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280170   18924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:56:40.280550   18924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:40.280764   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:40.280775   18924 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 16:56:40.280828   18924 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
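
Note: the toEnable map above is the set of addons the test asks minikube to turn on as part of start. Outside the test harness, the same addons can be toggled per profile from the host with the minikube CLI; a couple of illustrative invocations against this profile, not commands executed in this log:

    # Host-side equivalents for a couple of the addons enabled above (illustrative, not run in this log).
    minikube -p addons-408385 addons enable registry
    minikube -p addons-408385 addons enable metrics-server
    minikube -p addons-408385 addons list
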
	I0917 16:56:40.280929   18924 addons.go:69] Setting inspektor-gadget=true in profile "addons-408385"
	I0917 16:56:40.280942   18924 addons.go:69] Setting volcano=true in profile "addons-408385"
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon volcano=true in "addons-408385"
	I0917 16:56:40.280953   18924 addons.go:69] Setting storage-provisioner=true in profile "addons-408385"
	I0917 16:56:40.280966   18924 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-408385"
	I0917 16:56:40.280977   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.280993   18924 addons.go:69] Setting volumesnapshots=true in profile "addons-408385"
	I0917 16:56:40.280996   18924 addons.go:69] Setting metrics-server=true in profile "addons-408385"
	I0917 16:56:40.281007   18924 addons.go:234] Setting addon volumesnapshots=true in "addons-408385"
	I0917 16:56:40.281017   18924 addons.go:69] Setting helm-tiller=true in profile "addons-408385"
	I0917 16:56:40.280959   18924 addons.go:69] Setting cloud-spanner=true in profile "addons-408385"
	I0917 16:56:40.281025   18924 addons.go:69] Setting ingress-dns=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting default-storageclass=true in profile "addons-408385"
	I0917 16:56:40.281032   18924 addons.go:69] Setting gcp-auth=true in profile "addons-408385"
	I0917 16:56:40.281038   18924 addons.go:234] Setting addon ingress-dns=true in "addons-408385"
	I0917 16:56:40.281029   18924 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-408385"
	I0917 16:56:40.281044   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-408385"
	I0917 16:56:40.281049   18924 mustload.go:65] Loading cluster: addons-408385
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-408385"
	I0917 16:56:40.281053   18924 addons.go:234] Setting addon cloud-spanner=true in "addons-408385"
	I0917 16:56:40.281064   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280954   18924 addons.go:234] Setting addon inspektor-gadget=true in "addons-408385"
	I0917 16:56:40.281084   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281092   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281104   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.280980   18924 addons.go:234] Setting addon storage-provisioner=true in "addons-408385"
	I0917 16:56:40.281211   18924 config.go:182] Loaded profile config "addons-408385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 16:56:40.281258   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281547   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281572   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281587   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281033   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281010   18924 addons.go:234] Setting addon metrics-server=true in "addons-408385"
	I0917 16:56:40.281537   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.280928   18924 addons.go:69] Setting yakd=true in profile "addons-408385"
	I0917 16:56:40.280984   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281654   18924 addons.go:234] Setting addon yakd=true in "addons-408385"
	I0917 16:56:40.280987   18924 addons.go:69] Setting registry=true in profile "addons-408385"
	I0917 16:56:40.281672   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281014   18924 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:40.281028   18924 addons.go:234] Setting addon helm-tiller=true in "addons-408385"
	I0917 16:56:40.281535   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281702   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281674   18924 addons.go:234] Setting addon registry=true in "addons-408385"
	I0917 16:56:40.281712   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281541   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.281732   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281742   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.281021   18924 addons.go:69] Setting ingress=true in profile "addons-408385"
	I0917 16:56:40.281764   18924 addons.go:234] Setting addon ingress=true in "addons-408385"
	I0917 16:56:40.280936   18924 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-408385"
	I0917 16:56:40.281825   18924 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-408385"
	I0917 16:56:40.281873   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.281950   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282079   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282161   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282186   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282226   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282238   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282249   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282261   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282263   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282286   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282307   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282229   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282101   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282578   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282605   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.282779   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.282827   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.282866   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.283094   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.283133   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.286312   18924 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:40.287618   18924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:40.298908   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I0917 16:56:40.299068   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0917 16:56:40.309608   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.309649   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.309700   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0917 16:56:40.309807   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0917 16:56:40.310065   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.310120   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.311980   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312056   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312293   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.312867   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.312887   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313019   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313031   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313141   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.313152   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.313482   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313528   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.313558   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.313604   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.314157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314183   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.314467   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.314488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.314708   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.314760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315123   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.315527   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.315559   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.315951   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.320566   18924 addons.go:234] Setting addon default-storageclass=true in "addons-408385"
	I0917 16:56:40.320611   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.320981   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.321026   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.346242   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I0917 16:56:40.346807   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.347541   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.347571   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.348071   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.353705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.357689   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0917 16:56:40.358028   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0917 16:56:40.358152   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0917 16:56:40.358342   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0917 16:56:40.358940   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I0917 16:56:40.359063   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359591   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359683   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.359700   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.359848   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.359959   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0917 16:56:40.360078   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.360347   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360572   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360585   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360591   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360604   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360670   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.360866   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.360880   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.360892   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.360995   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.361576   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361617   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361638   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.361651   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.361713   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.361760   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.361816   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362002   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.362194   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.362474   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.362507   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.363919   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0917 16:56:40.364019   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364306   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.364485   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.364489   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.364503   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.365582   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0917 16:56:40.365887   18924 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-408385"
	I0917 16:56:40.365928   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.366157   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366191   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366314   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.366338   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.366584   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.366594   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0917 16:56:40.385020   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0917 16:56:40.385053   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0917 16:56:40.385026   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0917 16:56:40.385345   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385360   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385377   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.385392   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.385441   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.385849   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.385945   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.386207   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.386235   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.386770   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386839   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.386838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.386856   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.386896   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387149   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.387217   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.387351   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387369   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387504   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.387514   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.387573   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.387705   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.387723   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.387999   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.388667   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.388686   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.388751   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0917 16:56:40.389456   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.389491   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.390745   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.390799   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.390825   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.390929   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.391352   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.391419   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:40.391635   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0917 16:56:40.391780   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.391820   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.391906   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.391922   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.392274   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.392725   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.392756   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.392952   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I0917 16:56:40.393072   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.393464   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.393477   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.393806   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.393834   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.393926   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.394284   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.394301   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.394504   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395211   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395377   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.395596   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.395806   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.396088   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.396426   18924 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:40.396481   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:40.398128   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398337   18924 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.398355   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:40.398374   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.398436   18924 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:40.398463   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.398937   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:40.399242   18924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:40.399265   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.399639   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:56:40.400906   18924 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:56:40.400950   18924 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:40.401326   18924 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:40.401347   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.401945   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402595   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.402623   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.402809   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.402906   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:40.402919   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:56:40.402936   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.402975   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.403463   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.403552   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.403728   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.406113   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:40.406794   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406822   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0917 16:56:40.406831   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.406851   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.406868   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.407039   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.407412   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.407477   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I0917 16:56:40.407478   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.407595   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.407739   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.407938   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.407953   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:56:40.407967   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.408588   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408686   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.408706   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.408715   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.409106   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.409130   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.409365   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.409533   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.409652   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.409751   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.410256   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.410275   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.410457   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.410629   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.410870   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.410885   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.410934   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.411338   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.411612   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.411868   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0917 16:56:40.412251   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.412291   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.412296   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.412474   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.412838   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.412875   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.412954   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.412975   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.413015   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.413065   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.413625   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.413666   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.413850   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.414043   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.414175   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.414769   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.415585   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.417193   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.417660   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0917 16:56:40.418242   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.418815   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.418842   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.419070   18924 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:40.419428   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.420115   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:40.420155   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:40.420492   18924 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:40.420506   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:40.420522   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.422261   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0917 16:56:40.423377   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.423827   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424457   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.424478   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.424549   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.424568   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.424717   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.424845   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.424938   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.425025   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.425342   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.427630   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0917 16:56:40.428234   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.428248   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0917 16:56:40.428817   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.428839   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.428912   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.429324   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.429475   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.429488   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.429563   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429599   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.429886   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0917 16:56:40.430318   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.430434   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.430844   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.430971   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.430982   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.431353   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.431404   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.431891   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.433596   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.435129   18924 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:40.435873   18924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:40.436250   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.436549   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:40.436566   18924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:40.436587   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437385   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:40.437402   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:40.437420   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.437904   18924 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:40.439229   18924 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:40.440893   18924 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:40.440910   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:40.440929   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.442279   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.442325   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0917 16:56:40.442832   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443262   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443270   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.443295   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443522   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.443551   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.443747   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.443765   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.443793   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443812   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.443956   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.443983   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.444081   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444089   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.444200   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0917 16:56:40.444225   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.444247   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.444406   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.444567   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.444597   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.445584   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.445602   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.446414   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.446488   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0917 16:56:40.446604   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.447066   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.447281   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.448242   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.448260   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.448317   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.448690   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:40.448734   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.449156   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.449170   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.449190   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.449336   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.449490   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.449542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.449675   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.449801   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.450234   18924 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:56:40.451504   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:40.451601   18924 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.451622   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:56:40.451645   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.452360   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0917 16:56:40.452543   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.452876   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.453320   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.453340   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:40.453678   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:40.453929   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.454105   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.454452   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0917 16:56:40.454781   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.455056   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:40.455077   18924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:40.455161   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.455248   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455502   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.455528   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.455769   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.455786   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.455855   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.455994   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.456119   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.456170   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:40.456370   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.456909   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.457348   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.457979   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458463   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.458485   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.458484   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.458600   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:40.458671   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.458710   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.458729   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.458887   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.458918   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.458929   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.458938   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.458939   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:40.459021   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:40.460243   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:40.460246   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.460263   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:40.460261   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	W0917 16:56:40.460349   18924 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 16:56:40.460425   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.460624   18924 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:40.460639   18924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:40.460661   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.460968   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:40.463033   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:40.463859   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464289   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.464310   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.464521   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.464735   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.464912   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.465060   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.465353   18924 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:40.466408   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:40.466430   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:40.466455   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.468998   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469414   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0917 16:56:40.469615   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.469633   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.469650   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.469800   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:40.469875   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.470033   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.470168   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.470428   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:40.470451   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:40.470890   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:40.471071   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:40.472593   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:40.474373   18924 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:40.475735   18924 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:40.477135   18924 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:40.477147   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:40.477165   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:40.480812   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481316   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:40.481354   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:40.481624   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:40.481827   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:40.481966   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:40.482082   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:40.887038   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:40.887063   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:40.957503   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:40.957833   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:40.990013   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:40.996441   18924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:40.996591   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
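For context: the bash pipeline above rewrites the coredns ConfigMap so that the host's gateway IP resolves as host.minikube.internal inside the cluster. Reconstructed from the two sed expressions in that command (indentation and the elided plugins are illustrative, not taken from the live ConfigMap), the resulting Corefile fragment looks approximately like this:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts block answers queries for host.minikube.internal locally and falls through to the normal plugins for everything else; the added log directive only enables query logging.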
	I0917 16:56:41.047793   18924 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:41.047816   18924 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:41.050251   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:41.050266   18924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:41.052602   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:41.052619   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:41.070072   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:41.070385   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:41.085507   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:41.098190   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:41.098217   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:56:41.112724   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:41.177089   18924 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:41.177113   18924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:41.200547   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:41.200577   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:41.201601   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:41.201619   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:41.263241   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:41.263268   18924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:41.284538   18924 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:41.284563   18924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:41.462449   18924 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.462479   18924 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:56:41.516502   18924 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:41.516526   18924 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:41.527742   18924 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.527763   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:41.592582   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:41.592603   18924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:41.692484   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:41.692515   18924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:41.707737   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:41.707771   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:41.725728   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:41.725752   18924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:41.751606   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:41.763147   18924 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:41.763174   18924 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:41.845855   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:41.917959   18924 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:41.917982   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:41.932379   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:41.932409   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:41.933743   18924 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:41.933758   18924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:42.000189   18924 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.000209   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:42.019019   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:42.019039   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:42.120876   18924 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:42.120903   18924 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:42.215490   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:42.219259   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:42.235839   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:42.249709   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:42.249738   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:42.408626   18924 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:42.408660   18924 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:42.597811   18924 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:42.597836   18924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:42.832549   18924 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:42.832574   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:42.877638   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:42.877673   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:43.070157   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:43.223931   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:43.223966   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:43.642910   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:43.642945   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:44.074864   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:44.074888   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:44.426715   18924 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:44.426745   18924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:44.816971   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:47.444904   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:47.444944   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:47.448454   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.448848   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:47.448876   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:47.449068   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:47.449290   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:47.449479   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:47.449640   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:48.201028   18924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:48.440942   18924 addons.go:234] Setting addon gcp-auth=true in "addons-408385"
	I0917 16:56:48.440997   18924 host.go:66] Checking if "addons-408385" exists ...
	I0917 16:56:48.441325   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.441359   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.457638   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0917 16:56:48.458035   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.458476   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.458498   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.459269   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.459712   18924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 16:56:48.459740   18924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 16:56:48.475904   18924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0917 16:56:48.476401   18924 main.go:141] libmachine: () Calling .GetVersion
	I0917 16:56:48.476926   18924 main.go:141] libmachine: Using API Version  1
	I0917 16:56:48.476955   18924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 16:56:48.477337   18924 main.go:141] libmachine: () Calling .GetMachineName
	I0917 16:56:48.477515   18924 main.go:141] libmachine: (addons-408385) Calling .GetState
	I0917 16:56:48.479054   18924 main.go:141] libmachine: (addons-408385) Calling .DriverName
	I0917 16:56:48.479263   18924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:48.479286   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHHostname
	I0917 16:56:48.481756   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482133   18924 main.go:141] libmachine: (addons-408385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:b5:a2", ip: ""} in network mk-addons-408385: {Iface:virbr1 ExpiryTime:2024-09-17 17:56:07 +0000 UTC Type:0 Mac:52:54:00:69:b5:a2 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-408385 Clientid:01:52:54:00:69:b5:a2}
	I0917 16:56:48.482152   18924 main.go:141] libmachine: (addons-408385) DBG | domain addons-408385 has defined IP address 192.168.39.170 and MAC address 52:54:00:69:b5:a2 in network mk-addons-408385
	I0917 16:56:48.482342   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHPort
	I0917 16:56:48.482542   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHKeyPath
	I0917 16:56:48.482682   18924 main.go:141] libmachine: (addons-408385) Calling .GetSSHUsername
	I0917 16:56:48.482802   18924 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/addons-408385/id_rsa Username:docker}
	I0917 16:56:50.488236   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.53069821s)
	I0917 16:56:50.488278   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.530418125s)
	I0917 16:56:50.488291   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488303   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488312   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488328   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488345   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.49830433s)
	I0917 16:56:50.488378   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488393   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488405   18924 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.491938735s)
	I0917 16:56:50.488459   18924 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.491834553s)
	I0917 16:56:50.488485   18924 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0917 16:56:50.488684   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.418586295s)
	I0917 16:56:50.488715   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488725   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488806   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.418403871s)
	I0917 16:56:50.488819   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488826   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488884   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.403354884s)
	I0917 16:56:50.488898   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488905   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.488948   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.376200431s)
	I0917 16:56:50.488960   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.488968   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489028   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.737398015s)
	I0917 16:56:50.489042   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489053   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489103   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.643223171s)
	I0917 16:56:50.489114   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489123   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489186   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.273670484s)
	I0917 16:56:50.489198   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489205   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489334   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.27004628s)
	W0917 16:56:50.489366   18924 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:56:50.489413   18924 retry.go:31] will retry after 216.517027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
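The failure above is a CRD-establishment race: the same kubectl apply creates the VolumeSnapshot CRDs and, in the same invocation, a VolumeSnapshotClass object, so the class can be rejected before the new CRDs are being served ("ensure CRDs are installed first"). The log shows minikube handling this by retrying after a short backoff (retry.go above); the retry later in this log uses kubectl apply --force. A minimal sketch of that retry-the-apply pattern, assuming kubectl is on PATH; the helper names are hypothetical and this is not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs "kubectl apply -f ..." a few times with backoff,
    // which is enough to ride out a CRD that has not been established yet.
    func applyWithRetry(kubectl string, files []string, attempts int, backoff time.Duration) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command(kubectl, args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
            time.Sleep(backoff)
            backoff *= 2 // simple exponential backoff between attempts
        }
        return lastErr
    }

    func main() {
        // Hypothetical usage; the paths mirror the addon manifests shown in the log.
        err := applyWithRetry("kubectl", []string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
        }, 3, 200*time.Millisecond)
        if err != nil {
            fmt.Println(err)
        }
    }

An alternative to retrying is to apply the CRD manifests first, wait for their Established condition, and only then apply objects of those kinds.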
	I0917 16:56:50.489488   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.253623687s)
	I0917 16:56:50.489516   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489528   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.489623   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.419429218s)
	I0917 16:56:50.489637   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.489645   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490730   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490746   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490761   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490769   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490773   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490776   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490780   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490789   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490794   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490846   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490867   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490875   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490876   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490881   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490883   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490889   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490891   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490898   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.490930   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.490950   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.490956   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.490963   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.490969   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491011   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491030   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491037   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491044   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491051   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491088   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491105   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491111   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491119   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491128   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491168   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491188   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491195   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491202   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491208   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491246   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491266   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491272   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491281   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491288   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491328   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491348   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491353   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491360   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491366   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491403   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491422   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491428   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491435   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491441   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.491476   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.491493   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.491498   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.491505   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.491511   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.492188   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.492223   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.492231   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494315   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494322   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494514   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494536   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494542   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494578   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494609   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494616   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494691   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494714   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494720   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.494832   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.494873   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.494880   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495687   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495706   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.495732   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.495738   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.495746   18924 addons.go:475] Verifying addon registry=true in "addons-408385"
	I0917 16:56:50.496538   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496544   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496555   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496564   18924 addons.go:475] Verifying addon metrics-server=true in "addons-408385"
	I0917 16:56:50.496566   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496573   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496624   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496639   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496683   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496717   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496720   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.496727   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.496735   18924 addons.go:475] Verifying addon ingress=true in "addons-408385"
	I0917 16:56:50.496808   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.496815   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.497192   18924 node_ready.go:35] waiting up to 6m0s for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.497351   18924 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-408385 service yakd-dashboard -n yakd-dashboard
	
	I0917 16:56:50.497389   18924 out.go:177] * Verifying registry addon...
	I0917 16:56:50.498257   18924 out.go:177] * Verifying ingress addon...
	I0917 16:56:50.500180   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:56:50.500419   18924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 16:56:50.518284   18924 node_ready.go:49] node "addons-408385" has status "Ready":"True"
	I0917 16:56:50.518306   18924 node_ready.go:38] duration metric: took 21.091831ms for node "addons-408385" to be "Ready" ...
	I0917 16:56:50.518315   18924 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:50.520856   18924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:56:50.520883   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:50.523079   18924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 16:56:50.523105   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
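The kapi.go lines that follow simply poll the API server until every pod matching each addon's label selector leaves Pending. A minimal sketch of that kind of wait using client-go against the kubeconfig path seen in the log; the helper name, polling interval, and Running-only check are illustrative assumptions, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until at least one pod matches the selector and
    // every match is Running, mirroring the "waiting for pod ..." log messages.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil // treat list errors as transient and keep polling
            }
            if len(pods.Items) == 0 {
                return false, nil
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil
                }
            }
            return true, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitForLabeledPods(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute)
        fmt.Println("registry pods ready:", err)
    }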
	I0917 16:56:50.546145   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581347   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.581372   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.581745   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:50.581768   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.581818   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.581841   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.581859   18924 pod_ready.go:82] duration metric: took 35.685801ms for pod "coredns-7c65d6cfc9-6scmn" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.581871   18924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	W0917 16:56:50.581910   18924 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
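The 'default-storageclass' warning above is an optimistic-concurrency conflict: the local-path StorageClass was updated by something else between read and write, so the second update was rejected ("the object has been modified"). The usual remedy is to re-read the object and retry the update. A minimal client-go sketch of that pattern, assuming the standard is-default-class annotation; this is illustrative, not minikube's code:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading the object and retrying whenever the update hits a conflict.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err // a Conflict error here triggers another attempt
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _ = markNonDefault(context.Background(), cs, "local-path")
    }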
	I0917 16:56:50.586512   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:50.586530   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:50.586847   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:50.586867   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:50.596137   18924 pod_ready.go:93] pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.596162   18924 pod_ready.go:82] duration metric: took 14.284009ms for pod "coredns-7c65d6cfc9-mhzww" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.596172   18924 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623810   18924 pod_ready.go:93] pod "etcd-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.623835   18924 pod_ready.go:82] duration metric: took 27.656536ms for pod "etcd-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.623845   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.706847   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:50.717893   18924 pod_ready.go:93] pod "kube-apiserver-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.717915   18924 pod_ready.go:82] duration metric: took 94.063278ms for pod "kube-apiserver-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.717925   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902706   18924 pod_ready.go:93] pod "kube-controller-manager-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:50.902732   18924 pod_ready.go:82] duration metric: took 184.800591ms for pod "kube-controller-manager-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.902744   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:50.993709   18924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-408385" context rescaled to 1 replicas
	I0917 16:56:51.006258   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.006412   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.311675   18924 pod_ready.go:93] pod "kube-proxy-6blpt" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.311702   18924 pod_ready.go:82] duration metric: took 408.951515ms for pod "kube-proxy-6blpt" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.311711   18924 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.511546   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.512343   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:51.712678   18924 pod_ready.go:93] pod "kube-scheduler-addons-408385" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:51.712702   18924 pod_ready.go:82] duration metric: took 400.983783ms for pod "kube-scheduler-addons-408385" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:51.712710   18924 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:52.025749   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.026250   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.190681   18924 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.711392152s)
	I0917 16:56:52.191047   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.373996255s)
	I0917 16:56:52.191104   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191125   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191470   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:52.191517   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191536   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191553   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:52.191566   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:52.191792   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:52.191805   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:52.191826   18924 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-408385"
	I0917 16:56:52.192415   18924 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:52.193515   18924 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:56:52.195286   18924 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:56:52.196006   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:56:52.196821   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:56:52.196837   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:56:52.214434   18924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:56:52.214458   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:52.371675   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:56:52.371704   18924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:56:52.497363   18924 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.497383   18924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:56:52.504719   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:52.564224   18924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:52.700595   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.011701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.012015   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.159940   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.453036172s)
	I0917 16:56:53.160005   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160022   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160284   18924 main.go:141] libmachine: (addons-408385) DBG | Closing plugin on server side
	I0917 16:56:53.160332   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160341   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.160357   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.160374   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.160616   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.160633   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.201585   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.506249   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:53.506293   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.709697   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.738231   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:53.984139   18924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.419872413s)
	I0917 16:56:53.984190   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984212   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984568   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984589   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.984604   18924 main.go:141] libmachine: Making call to close driver server
	I0917 16:56:53.984612   18924 main.go:141] libmachine: (addons-408385) Calling .Close
	I0917 16:56:53.984834   18924 main.go:141] libmachine: Successfully made call to close driver server
	I0917 16:56:53.984853   18924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 16:56:53.987027   18924 addons.go:475] Verifying addon gcp-auth=true in "addons-408385"
	I0917 16:56:53.988873   18924 out.go:177] * Verifying gcp-auth addon...
	I0917 16:56:53.990825   18924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:56:54.055092   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.055115   18924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:56:54.055131   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.055387   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.202926   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.494716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:54.506148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:54.506174   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.701636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.994373   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.005045   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.005494   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.200848   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.495663   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:55.504909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.506086   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:55.855465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:55.856656   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.993948   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.005711   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.006104   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.201254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.494421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:56.505090   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.505414   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:56.701176   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.004844   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.005282   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.200660   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.494627   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:57.504621   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.505103   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:57.700909   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.994928   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.004757   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.005263   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.201434   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.219886   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:58.495575   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:58.504836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.505317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:58.701773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.994959   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.005951   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.006611   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.201975   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.495332   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:56:59.506814   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:56:59.507819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.700658   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.995245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.004708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.006302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.200967   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.219938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:00.495921   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:00.506377   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:00.506950   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:00.703768   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.995363   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.010398   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.011329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.202047   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.495085   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:01.504652   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:01.505645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:01.702029   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.994945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.006766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.008040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.200473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.221720   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:02.495451   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:02.504315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:02.506062   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.700326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.995096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.005924   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.006819   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.201912   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.495000   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:03.504765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.505943   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.701922   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.995337   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.004819   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.005035   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.201761   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.494642   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:04.504915   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.505321   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.702013   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.719604   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:04.995214   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.004602   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.005121   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.200850   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.494936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:05.505716   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.506224   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.700440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.994611   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.004208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.006099   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.200977   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.528028   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:06.528127   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.528173   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.701154   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.994040   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.004294   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.004738   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.200229   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.219592   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:07.495326   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:07.504606   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.505193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.700901   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.995249   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.004764   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.004900   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.200699   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.495328   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:08.503987   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.506826   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.700862   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.994609   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.004062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.004349   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.202126   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.220482   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:09.494945   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:09.505116   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.506159   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.701734   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.996629   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.019821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.021645   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.201473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.495799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:10.504801   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.506075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.704466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.994193   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.005581   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.005762   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.201601   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.495169   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:11.504802   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.505211   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.700302   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.719276   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:11.994525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.004692   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.005129   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.201376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.494979   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:12.505561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.505703   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.975902   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.995801   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.004147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.006830   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.200882   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.496008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:13.506567   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.507195   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.701055   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.719675   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:13.994939   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.004466   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.004915   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.202094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.495836   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:14.507503   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.508148   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.700728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.996044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.006105   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.006707   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.201653   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.494526   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:15.504505   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.505363   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.703586   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.994788   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.005108   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.005808   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.206044   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.220095   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:16.494315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:16.505315   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.704765   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.995307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.096405   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.096552   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.200374   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.495743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:17.505031   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.506329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.721075   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.995723   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.004552   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.005928   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.200087   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.495274   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:18.504597   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.507379   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.700946   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.719392   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:18.994993   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.004577   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.005098   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.589168   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:19.589327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.589667   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.589832   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.700535   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.994305   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.004913   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.005728   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.200701   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.494743   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:20.504820   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.506113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.702270   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.995072   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.004890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.005076   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.201054   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.219658   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:21.495297   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:21.505528   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.506012   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.702119   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.996390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.005561   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.005652   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.200739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.494563   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:22.506327   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.506676   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.700496   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.032136   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.032957   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.033036   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.202150   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.494360   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:23.504706   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.505348   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.947525   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.948575   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:23.994678   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.004245   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.005329   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.201222   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.495096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:24.508318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.510378   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.995276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.004269   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.007124   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.201504   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.495365   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:25.505283   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.505799   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.700648   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.039815   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.040228   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.040316   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.210495   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.220088   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:26.495232   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.510833   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.511093   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.700936   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.996436   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.004910   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.005741   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.202425   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.495288   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.505457   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.508530   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:27.700773   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.995376   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.005437   18924 kapi.go:107] duration metric: took 37.50525233s to wait for kubernetes.io/minikube-addons=registry ...
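The duration metric above closes out the registry wait; the repeated kapi.go:96 lines before it come from a label-selector poll that lists pods and re-checks their state until they leave Pending. A minimal sketch of that kind of wait loop, assuming client-go and purely illustrative namespace/selector/timeout values (this is not minikube's actual kapi.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until they are all Running
// or the timeout elapses. Sketch only; names and values are assumptions.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods with selector %q in namespace %q", selector, ns)
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the log above
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector and timeout mirror the registry wait logged above, but are illustrative.
	if err := waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods ready")
}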
	I0917 16:57:28.005661   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.201963   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.495032   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.505610   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.701512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.728312   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:28.995608   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.005993   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.202300   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.497995   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.504870   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.700212   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.995246   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.004884   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.202534   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.495333   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.505996   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.702019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.994099   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.005314   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.202708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.229988   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:31.493840   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.504120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.701449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.994920   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.004766   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.357159   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.495449   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.505535   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.701208   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.995100   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.004376   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.200664   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.498557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.507115   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.700821   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.718587   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:33.995468   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.005462   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.201071   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.495519   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.505080   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.701276   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.995558   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.004981   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.203003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.494303   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.504708   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.700739   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.718782   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:35.994881   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.097365   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.201890   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.495139   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.505487   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.701057   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.996834   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.005523   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.410454   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.516803   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.517120   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.701410   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.729938   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:37.996501   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.005193   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.200777   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.494507   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.504434   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.701189   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.994900   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.004122   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.201715   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.496473   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.506073   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.703094   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.994841   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.004452   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.201004   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:40.218439   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:40.495729   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.504143   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.853096   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.158440   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.159441   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.203681   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.494298   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.505342   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.701128   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.993947   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.005059   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.201190   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.219465   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:42.495543   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.505413   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.701555   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.995239   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.004317   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.201671   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.495708   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.505113   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.702002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.997002   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.004765   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.200983   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.507042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.510903   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.702550   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.723909   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:44.996307   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.004982   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.201479   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.495981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.505405   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.700916   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.998807   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.011459   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.201895   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.495657   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.506169   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.701933   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.999183   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.006964   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.203049   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.219100   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:47.498008   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.506371   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.707797   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.996867   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.004924   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.201042   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.495636   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.511151   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.701120   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.996590   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.012436   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.202003   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.494728   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.505025   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.785313   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.788215   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:49.994837   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.004304   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.201021   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.495181   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.505534   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.701006   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.994275   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.005002   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.203019   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.495078   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.596463   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.701421   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.994676   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.005680   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.200539   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:52.218738   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:52.497799   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.504566   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.700922   18924 kapi.go:107] duration metric: took 1m0.504912498s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:52.995147   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.004995   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.494512   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.505190   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.994795   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.004440   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.219175   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:54.495330   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.505134   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.995438   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.004940   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.495125   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.504590   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.995062   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.004478   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.225636   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:56.499868   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.505194   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.996724   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.005674   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.495684   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.505781   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.995631   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.008631   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.494323   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.504300   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.718959   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:58.993981   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.004453   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.494338   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.505823   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.995264   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.005723   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.500318   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.507383   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.937272   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:00.995905   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.004584   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.494776   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.504156   18924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.995441   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.005703   18924 kapi.go:107] duration metric: took 1m11.505283995s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 16:58:02.495427   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:02.994984   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.220146   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:03.494557   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:03.995254   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.495578   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:04.995390   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.592860   18924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:05.720255   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:05.995373   18924 kapi.go:107] duration metric: took 1m12.00454435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:58:05.997195   18924 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-408385 cluster.
	I0917 16:58:05.998523   18924 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:58:05.999866   18924 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:58:06.001358   18924 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 16:58:06.002828   18924 addons.go:510] duration metric: took 1m25.721995771s for enable addons: enabled=[helm-tiller nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server ingress-dns cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 16:58:08.220603   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:10.720898   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:13.220582   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:15.719610   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:18.218981   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:20.219159   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:22.219968   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:24.220176   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:26.719061   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:28.719706   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:31.220522   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:33.222077   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:35.720029   18924 pod_ready.go:103] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"False"
	I0917 16:58:37.220204   18924 pod_ready.go:93] pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.220228   18924 pod_ready.go:82] duration metric: took 1m45.507511223s for pod "metrics-server-84c5f94fbc-nxwr4" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.220238   18924 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225164   18924 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace has status "Ready":"True"
	I0917 16:58:37.225186   18924 pod_ready.go:82] duration metric: took 4.941018ms for pod "nvidia-device-plugin-daemonset-95n5v" in "kube-system" namespace to be "Ready" ...
	I0917 16:58:37.225205   18924 pod_ready.go:39] duration metric: took 1m46.70687885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:58:37.225220   18924 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:58:37.225261   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:37.225308   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:37.279317   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.279344   18924 cri.go:89] found id: ""
	I0917 16:58:37.279354   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:37.279413   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.283927   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:37.283993   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:37.333054   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.333075   18924 cri.go:89] found id: ""
	I0917 16:58:37.333082   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:37.333127   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.337854   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:37.337913   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:37.376799   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:37.376819   18924 cri.go:89] found id: ""
	I0917 16:58:37.376826   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:37.376871   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.381347   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:37.381426   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:37.427851   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:37.427871   18924 cri.go:89] found id: ""
	I0917 16:58:37.427878   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:37.427920   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.432240   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:37.432302   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:37.479690   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.479709   18924 cri.go:89] found id: ""
	I0917 16:58:37.479720   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:37.479769   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.484307   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:37.484359   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:37.530462   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:37.530482   18924 cri.go:89] found id: ""
	I0917 16:58:37.530490   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:37.530536   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:37.534804   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:37.534867   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:37.576826   18924 cri.go:89] found id: ""
	I0917 16:58:37.576855   18924 logs.go:276] 0 containers: []
	W0917 16:58:37.576867   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:37.576879   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:37.576897   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:37.628751   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:37.628793   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:37.693416   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:37.693451   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:37.734110   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:37.734140   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:38.409207   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:38.409261   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:38.463953   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:38.463988   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:38.554114   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:38.554151   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:38.572938   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:38.572963   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:38.770050   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:38.770086   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:38.817495   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:38.817523   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:38.864149   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:38.864183   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.429718   18924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:58:41.453455   18924 api_server.go:72] duration metric: took 2m1.172653121s to wait for apiserver process to appear ...
	I0917 16:58:41.453496   18924 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:58:41.453536   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:41.453601   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:41.494855   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:41.494880   18924 cri.go:89] found id: ""
	I0917 16:58:41.494890   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:41.494938   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.499492   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:41.499556   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:41.538940   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.538965   18924 cri.go:89] found id: ""
	I0917 16:58:41.538974   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:41.539031   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.543179   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:41.543238   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:41.592083   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.592107   18924 cri.go:89] found id: ""
	I0917 16:58:41.592115   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:41.592162   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.596864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:41.596926   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:41.642101   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.642126   18924 cri.go:89] found id: ""
	I0917 16:58:41.642136   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:41.642182   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.647074   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:41.647150   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:41.689215   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:41.689253   18924 cri.go:89] found id: ""
	I0917 16:58:41.689262   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:41.689322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.693834   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:41.693902   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:41.736215   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:41.736241   18924 cri.go:89] found id: ""
	I0917 16:58:41.736251   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:41.736309   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:41.740897   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:41.740965   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:41.782588   18924 cri.go:89] found id: ""
	I0917 16:58:41.782611   18924 logs.go:276] 0 containers: []
	W0917 16:58:41.782619   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:41.782626   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:41.782637   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:41.843944   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:41.843982   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:41.886360   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:41.886389   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:41.932278   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:41.932318   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:42.000845   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:42.000894   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:42.752922   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:42.752965   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:42.806585   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:42.806621   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:42.822915   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:42.822950   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:42.872331   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:42.872363   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:42.911531   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:42.911556   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:43.001970   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:43.002011   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:45.635837   18924 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0917 16:58:45.642306   18924 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0917 16:58:45.643242   18924 api_server.go:141] control plane version: v1.31.1
	I0917 16:58:45.643264   18924 api_server.go:131] duration metric: took 4.189760157s to wait for apiserver health ...
	I0917 16:58:45.643271   18924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:58:45.643288   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 16:58:45.643328   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 16:58:45.693219   18924 cri.go:89] found id: "bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:45.693256   18924 cri.go:89] found id: ""
	I0917 16:58:45.693265   18924 logs.go:276] 1 containers: [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454]
	I0917 16:58:45.693322   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.698334   18924 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 16:58:45.698400   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 16:58:45.762484   18924 cri.go:89] found id: "535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:45.762509   18924 cri.go:89] found id: ""
	I0917 16:58:45.762517   18924 logs.go:276] 1 containers: [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15]
	I0917 16:58:45.762574   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.767293   18924 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 16:58:45.767362   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 16:58:45.815706   18924 cri.go:89] found id: "bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:45.815734   18924 cri.go:89] found id: ""
	I0917 16:58:45.815743   18924 logs.go:276] 1 containers: [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707]
	I0917 16:58:45.815801   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.821316   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 16:58:45.821379   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 16:58:45.872354   18924 cri.go:89] found id: "5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:45.872375   18924 cri.go:89] found id: ""
	I0917 16:58:45.872384   18924 logs.go:276] 1 containers: [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad]
	I0917 16:58:45.872457   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.876864   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 16:58:45.876916   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 16:58:45.933435   18924 cri.go:89] found id: "78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:45.933456   18924 cri.go:89] found id: ""
	I0917 16:58:45.933464   18924 logs.go:276] 1 containers: [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0]
	I0917 16:58:45.933522   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.937839   18924 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 16:58:45.937893   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 16:58:45.990922   18924 cri.go:89] found id: "eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:45.990950   18924 cri.go:89] found id: ""
	I0917 16:58:45.990960   18924 logs.go:276] 1 containers: [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44]
	I0917 16:58:45.991013   18924 ssh_runner.go:195] Run: which crictl
	I0917 16:58:45.995807   18924 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 16:58:45.995870   18924 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 16:58:46.057313   18924 cri.go:89] found id: ""
	I0917 16:58:46.057345   18924 logs.go:276] 0 containers: []
	W0917 16:58:46.057362   18924 logs.go:278] No container was found matching "kindnet"
	I0917 16:58:46.057372   18924 logs.go:123] Gathering logs for kubelet ...
	I0917 16:58:46.057385   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 16:58:46.149501   18924 logs.go:123] Gathering logs for describe nodes ...
	I0917 16:58:46.149539   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 16:58:46.282319   18924 logs.go:123] Gathering logs for kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] ...
	I0917 16:58:46.282352   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454"
	I0917 16:58:46.337878   18924 logs.go:123] Gathering logs for coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] ...
	I0917 16:58:46.337916   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707"
	I0917 16:58:46.391452   18924 logs.go:123] Gathering logs for kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] ...
	I0917 16:58:46.391485   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0"
	I0917 16:58:46.429573   18924 logs.go:123] Gathering logs for CRI-O ...
	I0917 16:58:46.429607   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 16:58:47.320590   18924 logs.go:123] Gathering logs for dmesg ...
	I0917 16:58:47.320629   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 16:58:47.339176   18924 logs.go:123] Gathering logs for etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] ...
	I0917 16:58:47.339207   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15"
	I0917 16:58:47.401618   18924 logs.go:123] Gathering logs for kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] ...
	I0917 16:58:47.401661   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad"
	I0917 16:58:47.448277   18924 logs.go:123] Gathering logs for kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] ...
	I0917 16:58:47.448312   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44"
	I0917 16:58:47.519002   18924 logs.go:123] Gathering logs for container status ...
	I0917 16:58:47.519038   18924 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 16:58:50.090393   18924 system_pods.go:59] 18 kube-system pods found
	I0917 16:58:50.090427   18924 system_pods.go:61] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.090432   18924 system_pods.go:61] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.090436   18924 system_pods.go:61] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.090440   18924 system_pods.go:61] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.090443   18924 system_pods.go:61] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.090446   18924 system_pods.go:61] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.090449   18924 system_pods.go:61] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.090453   18924 system_pods.go:61] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.090456   18924 system_pods.go:61] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.090459   18924 system_pods.go:61] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.090462   18924 system_pods.go:61] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.090465   18924 system_pods.go:61] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.090468   18924 system_pods.go:61] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.090472   18924 system_pods.go:61] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.090477   18924 system_pods.go:61] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.090480   18924 system_pods.go:61] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.090483   18924 system_pods.go:61] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.090486   18924 system_pods.go:61] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.090492   18924 system_pods.go:74] duration metric: took 4.447215491s to wait for pod list to return data ...
	I0917 16:58:50.090505   18924 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:58:50.093151   18924 default_sa.go:45] found service account: "default"
	I0917 16:58:50.093172   18924 default_sa.go:55] duration metric: took 2.662022ms for default service account to be created ...
	I0917 16:58:50.093180   18924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:58:50.100564   18924 system_pods.go:86] 18 kube-system pods found
	I0917 16:58:50.100596   18924 system_pods.go:89] "coredns-7c65d6cfc9-6scmn" [8db4f4dd-ff63-4e6e-8533-37fc690e481f] Running
	I0917 16:58:50.100607   18924 system_pods.go:89] "csi-hostpath-attacher-0" [65b71f4b-d36f-4dc6-bdae-333899320ff0] Running
	I0917 16:58:50.100619   18924 system_pods.go:89] "csi-hostpath-resizer-0" [c83c2084-ccc8-4b76-9ea2-170c35f90d38] Running
	I0917 16:58:50.100623   18924 system_pods.go:89] "csi-hostpathplugin-l4qgp" [3d956da8-0046-445f-91ca-13ca2f599dd9] Running
	I0917 16:58:50.100628   18924 system_pods.go:89] "etcd-addons-408385" [12d66991-8c52-4c93-bbc7-62243564fa8c] Running
	I0917 16:58:50.100632   18924 system_pods.go:89] "kube-apiserver-addons-408385" [e7968656-cd51-4c73-b4d3-8fdf9e3a0397] Running
	I0917 16:58:50.100637   18924 system_pods.go:89] "kube-controller-manager-addons-408385" [f969f875-2b8a-4c74-9989-03e557f8a909] Running
	I0917 16:58:50.100640   18924 system_pods.go:89] "kube-ingress-dns-minikube" [a365fa42-68bf-4f57-ad20-e437ef76117e] Running
	I0917 16:58:50.100643   18924 system_pods.go:89] "kube-proxy-6blpt" [fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc] Running
	I0917 16:58:50.100647   18924 system_pods.go:89] "kube-scheduler-addons-408385" [4c2a228c-678f-48c1-96df-80d490cf18de] Running
	I0917 16:58:50.100650   18924 system_pods.go:89] "metrics-server-84c5f94fbc-nxwr4" [b55954ef-19c5-428e-b2f5-64cb84921e99] Running
	I0917 16:58:50.100657   18924 system_pods.go:89] "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
	I0917 16:58:50.100664   18924 system_pods.go:89] "registry-66c9cd494c-5dzpj" [2f4278c0-9bc9-4d2d-8e73-43d39ddd1504] Running
	I0917 16:58:50.100667   18924 system_pods.go:89] "registry-proxy-84sgt" [93e3187d-0292-45df-9221-e406397b489f] Running
	I0917 16:58:50.100670   18924 system_pods.go:89] "snapshot-controller-56fcc65765-hzt86" [80bf610f-3214-4cdb-90db-4fb1bf38882c] Running
	I0917 16:58:50.100674   18924 system_pods.go:89] "snapshot-controller-56fcc65765-v8kzp" [d6dcec3f-4138-4065-aa77-d339d5b2a2d6] Running
	I0917 16:58:50.100677   18924 system_pods.go:89] "storage-provisioner" [308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1] Running
	I0917 16:58:50.100680   18924 system_pods.go:89] "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
	I0917 16:58:50.100687   18924 system_pods.go:126] duration metric: took 7.502942ms to wait for k8s-apps to be running ...
	I0917 16:58:50.100695   18924 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:58:50.100746   18924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:58:50.115763   18924 system_svc.go:56] duration metric: took 15.057221ms WaitForService to wait for kubelet
	I0917 16:58:50.115798   18924 kubeadm.go:582] duration metric: took 2m9.83500224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:58:50.115816   18924 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:58:50.119437   18924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 16:58:50.119462   18924 node_conditions.go:123] node cpu capacity is 2
	I0917 16:58:50.119474   18924 node_conditions.go:105] duration metric: took 3.65352ms to run NodePressure ...
	I0917 16:58:50.119484   18924 start.go:241] waiting for startup goroutines ...
	I0917 16:58:50.119490   18924 start.go:246] waiting for cluster config update ...
	I0917 16:58:50.119505   18924 start.go:255] writing updated cluster config ...
	I0917 16:58:50.119789   18924 ssh_runner.go:195] Run: rm -f paused
	I0917 16:58:50.169934   18924 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:58:50.173108   18924 out.go:177] * Done! kubectl is now configured to use "addons-408385" cluster and "default" namespace by default
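Note on the gcp-auth messages earlier in this log: the addon's webhook mounts GCP credentials into newly created pods, and the log names the `gcp-auth-skip-secret` label key as the per-pod opt-out. A minimal sketch of creating a pod that carries that label, assuming the webhook only checks for the key being present (the value `true` and the busybox test pod are illustrative assumptions, not taken from this log):

	kubectl --context addons-408385 run skip-auth-test --image=busybox --restart=Never \
	  --labels=gcp-auth-skip-secret=true -- sleep 3600

As the log itself states, pods that already exist keep their mounted credentials until they are recreated or the addon is re-enabled with --refresh.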
	
	
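The log-gathering steps recorded above (the logs.go and ssh_runner lines) collect component logs on the node with crictl and journalctl. A minimal sketch of repeating that collection by hand, assuming shell access to the node (for example via `minikube ssh -p addons-408385`); the container IDs differ on every run:

	# find the kube-apiserver container ID under CRI-O
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the last 400 lines of that container's logs (substitute the ID printed above)
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	# kubelet and CRI-O service logs, matching the journalctl calls above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400

The same pattern applies to the etcd, coredns, kube-scheduler, kube-proxy and kube-controller-manager containers listed earlier; the CRI-O service log excerpt follows below.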
	==> CRI-O <==
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.867732947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26eccf0c-e1b3-49d4-b47d-aee12df64c2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.867997638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204
223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26eccf0c-e1b3-49d4-b47d-aee12df64c2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.910459583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87037853-7ddd-4dd9-b4bd-504ae63fd065 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.910592991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87037853-7ddd-4dd9-b4bd-504ae63fd065 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.911718589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22d1de5f-06d8-4553-93e8-33f699bfad2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.913085154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593157913057111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22d1de5f-06d8-4553-93e8-33f699bfad2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.913730141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=128980b3-f8ac-4d3c-b71b-00842311881e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.913809951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=128980b3-f8ac-4d3c-b71b-00842311881e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.914061984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=128980b3-f8ac-4d3c-b71b-00842311881e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.953947792Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf.7ZM5T2\"" file="server/server.go:805"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954048873Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf.7ZM5T2\"" file="server/server.go:805"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954072070Z" level=debug msg="Container or sandbox exited: c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf.7ZM5T2" file="server/server.go:810"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954102733Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf\"" file="server/server.go:805"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954121509Z" level=debug msg="Container or sandbox exited: c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf" file="server/server.go:810"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954140130Z" level=debug msg="container exited and found: c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf" file="server/server.go:825"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.954175238Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf.7ZM5T2\"" file="server/server.go:805"
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.962143267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7148dfcb-cf55-4a1b-bb5c-079d6a09ff6e name=/runtime.v1.RuntimeService/Version
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.962275797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7148dfcb-cf55-4a1b-bb5c-079d6a09ff6e name=/runtime.v1.RuntimeService/Version
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.963666138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29d952fe-6bea-469e-a8bc-70df895595f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.964968586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593157964933397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29d952fe-6bea-469e-a8bc-70df895595f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.965664137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c8ddef9-e528-4572-9183-4bf4152a6cbf name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.965738922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c8ddef9-e528-4572-9183-4bf4152a6cbf name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.965993300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00b9a002ec125abe4e7f50a4e6bd2705bc2fed2a77e71ae5ac798b3114e1db6c,PodSandboxId:9e624bc2158e468932af5a0902901aa8f1bf34037db6c4bab45b08b219f11247,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726593018687650340,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cnzjd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cceea962-d19d-4282-aaaf-96c6277ba99f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397f39163049566d23be8376dbed334aadd27c8c56e83f417aa2c59f2a252f9b,PodSandboxId:6e889439508c6580558306b641e7b6cfc5d4ce54fb03881be02e737d80da3344,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726592879375090193,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 076f30f2-e5f7-4810-8e8d-613a12b5664c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d,PodSandboxId:e39bbf11dc16564121b75b3ae0c124b1d4b3e667c00c6d0e30fe71b8dcc2eb3d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726592284895958469,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-b7hz4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0d20c2e4-2c70-48c2-8fb9-a28309d6b41f,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf,PodSandboxId:d5f73be9090dd4d27345f77aa93b450ea10f7e369ead9c0ec9078f75a9967238,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726592248653089159,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-nxwr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b55954ef-19c5-428e-b2f5-64cb84921e99,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea,PodSandboxId:1266dc642a5d5d817566a303855b89ad35e9ba5ce0cd09a6987308c623d146d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726592207362039245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308ed5f9-f16c-45d9-b7c4-edb96a6aa2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707,PodSandboxId:147e55def5b9c0c7a9d8b335c46dbfd7c65e5774b9251734e2b6257b17749d03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726592204223691882,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6scmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db4f4dd-ff63-4e6e-8533-37fc690e481f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0,PodSandboxId:ba0e8772c0eef1236f4fa4985d24c32a210d0f1bdd86f2a9a4221eb1e6e06384,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726592200857172807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6blpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea4c9da-fbd5-4ac9-be8e-7cd0b574e3fc,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad,PodSandboxId:900247dfddd23f3136780bf7695d0bc30603abe8a6d1321d2c4ce6551729d09a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726592190061688263,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3943b5aa55300847a88d97baf9f5fcd9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15,PodSandboxId:0d778775904583ba97eb27717ac52155cf18a8980c70a3e42c566fb034a6538c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726592190061957344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b22844b2da94ceb2dd6e2ff998a06b7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44,PodSandboxId:b694988dbe8974b1414fe63b75517e2c0ec0abb7613fd6db4ec17ca7ba275fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726592190025949182,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7054c0bf6f7ef8f456663c4c477a6e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454,PodSandboxId:b5606602c326c9d22fba7b773a1101738483f2c55a045ae530eb2568b3631e79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726592189934323923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-408385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e28321a1692b9c5e59016b226421277,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c8ddef9-e528-4572-9183-4bf4152a6cbf name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:12:37 addons-408385 crio[662]: time="2024-09-17 17:12:37.979715989Z" level=debug msg="Unmounted container c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf" file="storage/runtime.go:495" id=500b907e-a7cf-4a2e-a0d4-00296abfdc95 name=/runtime.v1.RuntimeService/StopContainer
	Sep 17 17:12:38 addons-408385 crio[662]: time="2024-09-17 17:12:38.004720712Z" level=debug msg="Found exit code for c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf: 0" file="oci/runtime_oci.go:1022" id=500b907e-a7cf-4a2e-a0d4-00296abfdc95 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00b9a002ec125       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   9e624bc2158e4       hello-world-app-55bf9c44b4-cnzjd
	397f391630495       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   6e889439508c6       nginx
	f4c5e175eedc0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   e39bbf11dc165       gcp-auth-89d5ffd79-b7hz4
	c35ba12caa08b       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Exited              metrics-server            0                   d5f73be9090dd       metrics-server-84c5f94fbc-nxwr4
	4b3332c3d6766       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   1266dc642a5d5       storage-provisioner
	bc6baaebe3ad7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   147e55def5b9c       coredns-7c65d6cfc9-6scmn
	78abe757b26b6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   ba0e8772c0eef       kube-proxy-6blpt
	535459bc7374f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   0d77877590458       etcd-addons-408385
	5e8239454541e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   900247dfddd23       kube-scheduler-addons-408385
	eb8765767a52a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   b694988dbe897       kube-controller-manager-addons-408385
	bd97816994086       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   b5606602c326c       kube-apiserver-addons-408385
	
	
	==> coredns [bc6baaebe3ad74013549e8c5239be044bfe3731677ed27631cdfb45f24d70707] <==
	[INFO] 127.0.0.1:57157 - 41320 "HINFO IN 6395580120945152869.1644042831807943476. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013215393s
	[INFO] 10.244.0.7:46761 - 53795 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000416235s
	[INFO] 10.244.0.7:46761 - 26406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000534252s
	[INFO] 10.244.0.7:48828 - 43868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012662s
	[INFO] 10.244.0.7:48828 - 6464 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151166s
	[INFO] 10.244.0.7:37590 - 72 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092427s
	[INFO] 10.244.0.7:37590 - 33095 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000167498s
	[INFO] 10.244.0.7:58968 - 53960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109529s
	[INFO] 10.244.0.7:58968 - 34006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103021s
	[INFO] 10.244.0.7:37473 - 44286 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113543s
	[INFO] 10.244.0.7:37473 - 56545 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093243s
	[INFO] 10.244.0.7:41216 - 28183 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000250593s
	[INFO] 10.244.0.7:41216 - 45082 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008693s
	[INFO] 10.244.0.7:54147 - 34285 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052976s
	[INFO] 10.244.0.7:54147 - 34283 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055269s
	[INFO] 10.244.0.7:52498 - 26622 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077619s
	[INFO] 10.244.0.7:52498 - 59135 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083094s
	[INFO] 10.244.0.22:33436 - 15658 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000483817s
	[INFO] 10.244.0.22:54534 - 52664 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000596513s
	[INFO] 10.244.0.22:60274 - 25830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160929s
	[INFO] 10.244.0.22:55742 - 23361 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135249s
	[INFO] 10.244.0.22:58422 - 120 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117693s
	[INFO] 10.244.0.22:60253 - 8920 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000245419s
	[INFO] 10.244.0.22:47422 - 15749 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000974274s
	[INFO] 10.244.0.22:57287 - 962 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001082864s
	
	
	==> describe nodes <==
	Name:               addons-408385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-408385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-408385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-408385
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-408385
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:12:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:10:41 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:10:41 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:10:41 +0000   Tue, 17 Sep 2024 16:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:10:41 +0000   Tue, 17 Sep 2024 16:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    addons-408385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 303ab64fe93940c69a272a146d3d7928
	  System UUID:                303ab64f-e939-40c6-9a27-2a146d3d7928
	  Boot ID:                    fb6d0db4-ddc4-405a-8acb-6d4fe2f98715
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-cnzjd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  gcp-auth                    gcp-auth-89d5ffd79-b7hz4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-6scmn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-408385                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-408385             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-408385    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6blpt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-408385             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-408385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-408385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-408385 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m   kubelet          Node addons-408385 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-408385 event: Registered Node addons-408385 in Controller
	
	
	==> dmesg <==
	[  +7.274919] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.009609] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.152447] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.508316] kauditd_printk_skb: 31 callbacks suppressed
	[  +9.343675] kauditd_printk_skb: 13 callbacks suppressed
	[Sep17 16:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.434512] kauditd_printk_skb: 35 callbacks suppressed
	[ +36.392548] kauditd_printk_skb: 30 callbacks suppressed
	[Sep17 16:59] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:00] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:03] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:06] kauditd_printk_skb: 28 callbacks suppressed
	[Sep17 17:07] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.051902] kauditd_printk_skb: 49 callbacks suppressed
	[ +21.859037] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.899065] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.636828] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.569598] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.336851] kauditd_printk_skb: 27 callbacks suppressed
	[Sep17 17:08] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.561003] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.057084] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.541549] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.340073] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 17:10] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [535459bc7374f0d455056fdbb805ea54bf13eddb8536862b9a891b32e9662c15] <==
	{"level":"info","ts":"2024-09-17T16:57:40.825956Z","caller":"traceutil/trace.go:171","msg":"trace[1450865517] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1019; }","duration":"142.345433ms","start":"2024-09-17T16:57:40.683601Z","end":"2024-09-17T16:57:40.825947Z","steps":["trace[1450865517] 'agreement among raft nodes before linearized reading'  (duration: 142.316331ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:41.126635Z","caller":"traceutil/trace.go:171","msg":"trace[957294072] linearizableReadLoop","detail":"{readStateIndex:1046; appliedIndex:1045; }","duration":"156.015075ms","start":"2024-09-17T16:57:40.970600Z","end":"2024-09-17T16:57:41.126615Z","steps":["trace[957294072] 'read index received'  (duration: 151.361865ms)","trace[957294072] 'applied index is now lower than readState.Index'  (duration: 4.652523ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:57:41.127181Z","caller":"traceutil/trace.go:171","msg":"trace[1537240876] transaction","detail":"{read_only:false; response_revision:1020; number_of_response:1; }","duration":"288.479372ms","start":"2024-09-17T16:57:40.838686Z","end":"2024-09-17T16:57:41.127165Z","steps":["trace[1537240876] 'process raft request'  (duration: 283.161611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.127246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.650442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.129441Z","caller":"traceutil/trace.go:171","msg":"trace[1844929121] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"158.756306ms","start":"2024-09-17T16:57:40.970576Z","end":"2024-09-17T16:57:41.129332Z","steps":["trace[1844929121] 'agreement among raft nodes before linearized reading'  (duration: 156.630189ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:57:41.128691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.896858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:41.130001Z","caller":"traceutil/trace.go:171","msg":"trace[2083427081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"150.161523ms","start":"2024-09-17T16:57:40.979772Z","end":"2024-09-17T16:57:41.129934Z","steps":["trace[2083427081] 'agreement among raft nodes before linearized reading'  (duration: 148.875086ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:57:49.758492Z","caller":"traceutil/trace.go:171","msg":"trace[1749067213] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"142.014283ms","start":"2024-09-17T16:57:49.616449Z","end":"2024-09-17T16:57:49.758464Z","steps":["trace[1749067213] 'read index received'  (duration: 137.745885ms)","trace[1749067213] 'applied index is now lower than readState.Index'  (duration: 4.264398ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:57:49.761832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.490472ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:57:49.761946Z","caller":"traceutil/trace.go:171","msg":"trace[402711062] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1100; }","duration":"145.520155ms","start":"2024-09-17T16:57:49.616413Z","end":"2024-09-17T16:57:49.761933Z","steps":["trace[402711062] 'agreement among raft nodes before linearized reading'  (duration: 142.267631ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:00.905621Z","caller":"traceutil/trace.go:171","msg":"trace[1306530275] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"289.751652ms","start":"2024-09-17T16:58:00.615836Z","end":"2024-09-17T16:58:00.905587Z","steps":["trace[1306530275] 'read index received'  (duration: 289.557299ms)","trace[1306530275] 'applied index is now lower than readState.Index'  (duration: 193.815µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:58:00.905791Z","caller":"traceutil/trace.go:171","msg":"trace[759752851] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"392.185176ms","start":"2024-09-17T16:58:00.513584Z","end":"2024-09-17T16:58:00.905769Z","steps":["trace[759752851] 'process raft request'  (duration: 391.871349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.905917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T16:58:00.513568Z","time spent":"392.247218ms","remote":"127.0.0.1:46318","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1125 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-17T16:58:00.906045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.207792ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:58:00.906091Z","caller":"traceutil/trace.go:171","msg":"trace[250334734] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1127; }","duration":"290.252982ms","start":"2024-09-17T16:58:00.615830Z","end":"2024-09-17T16:58:00.906083Z","steps":["trace[250334734] 'agreement among raft nodes before linearized reading'  (duration: 290.193074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T16:58:00.906401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.852875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-09-17T16:58:00.906443Z","caller":"traceutil/trace.go:171","msg":"trace[1208448744] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-nxwr4; range_end:; response_count:1; response_revision:1127; }","duration":"214.898501ms","start":"2024-09-17T16:58:00.691537Z","end":"2024-09-17T16:58:00.906435Z","steps":["trace[1208448744] 'agreement among raft nodes before linearized reading'  (duration: 214.748925ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:58:05.566022Z","caller":"traceutil/trace.go:171","msg":"trace[1110581115] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"216.906105ms","start":"2024-09-17T16:58:05.349099Z","end":"2024-09-17T16:58:05.566005Z","steps":["trace[1110581115] 'process raft request'  (duration: 216.44414ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:31.055466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1534}
	{"level":"info","ts":"2024-09-17T17:06:31.099441Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1534,"took":"43.089418ms","hash":2805165075,"current-db-size-bytes":6627328,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T17:06:31.099569Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2805165075,"revision":1534,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T17:08:47.292674Z","caller":"traceutil/trace.go:171","msg":"trace[380971827] transaction","detail":"{read_only:false; response_revision:2615; number_of_response:1; }","duration":"195.793214ms","start":"2024-09-17T17:08:47.096842Z","end":"2024-09-17T17:08:47.292635Z","steps":["trace[380971827] 'process raft request'  (duration: 195.290233ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:11:31.067778Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1954}
	{"level":"info","ts":"2024-09-17T17:11:31.089865Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1954,"took":"21.051389ms","hash":3248132247,"current-db-size-bytes":6627328,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":4784128,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-17T17:11:31.089944Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3248132247,"revision":1954,"compact-revision":1534}
	
	
	==> gcp-auth [f4c5e175eedc073e97f9ea9680c41d68c3c000caa69196b46e03499304831c0d] <==
	2024/09/17 16:58:50 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:53 Ready to marshal response ...
	2024/09/17 17:06:53 Ready to write response ...
	2024/09/17 17:06:55 Ready to marshal response ...
	2024/09/17 17:06:55 Ready to write response ...
	2024/09/17 17:07:04 Ready to marshal response ...
	2024/09/17 17:07:04 Ready to write response ...
	2024/09/17 17:07:06 Ready to marshal response ...
	2024/09/17 17:07:06 Ready to write response ...
	2024/09/17 17:07:29 Ready to marshal response ...
	2024/09/17 17:07:29 Ready to write response ...
	2024/09/17 17:07:50 Ready to marshal response ...
	2024/09/17 17:07:50 Ready to write response ...
	2024/09/17 17:07:56 Ready to marshal response ...
	2024/09/17 17:07:56 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:08:08 Ready to marshal response ...
	2024/09/17 17:08:08 Ready to write response ...
	2024/09/17 17:10:17 Ready to marshal response ...
	2024/09/17 17:10:17 Ready to write response ...
	
	
	==> kernel <==
	 17:12:38 up 16 min,  0 users,  load average: 0.18, 0.46, 0.49
	Linux addons-408385 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd9781699408624010c918e7363ebd3fc69c5d7215a320162bbb531958465454] <==
	E0917 17:07:32.332918       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:33.341648       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:34.356998       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:35.368574       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:07:36.377836       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 17:07:45.022032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.022241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.069544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.070007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.095472       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.095596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.140075       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.140334       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:07:45.293286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:07:45.293432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:07:46.095884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:07:46.293643       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0917 17:07:46.311410       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0917 17:07:56.668400       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:07:56.842947       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.139.217"}
	I0917 17:08:01.899778       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:08:03.039111       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 17:08:08.207716       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.117.32"}
	I0917 17:10:17.545474       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.45.149"}
	E0917 17:10:18.941843       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [eb8765767a52a4f8222028006b0b8faa8a8d97e3147ccf1bcc8fd12ccbe4ca44] <==
	W0917 17:10:21.588190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:21.588272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:10:25.769079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:25.769115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:10:29.690467       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0917 17:10:41.900744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-408385"
	W0917 17:10:44.889256       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:44.889323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:10:47.178150       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:10:47.178275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:00.249589       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:00.249628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:10.612569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:10.612665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:26.136417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:26.136601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:42.528541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:42.528614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:49.299804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:49.299925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:11:56.195667       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:11:56.195894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:12:07.519112       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:12:07.519526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:12:36.826704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.323µs"
	
	
	==> kube-proxy [78abe757b26b66b9fd1076e95424ab468b2f7513eb14773f514a077e0ba029e0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 16:56:41.756177       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 16:56:41.772932       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E0917 16:56:41.773152       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:41.851988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 16:56:41.852089       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 16:56:41.852113       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:41.860672       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:41.861044       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:41.861068       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:41.862987       1 config.go:199] "Starting service config controller"
	I0917 16:56:41.863008       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:41.863038       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:41.863044       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:41.863485       1 config.go:328] "Starting node config controller"
	I0917 16:56:41.863493       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:41.963269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:41.963331       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:41.963540       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5e8239454541edc79aed584f9a4c91f03bdcf9d9e812b66b30c658c8b46abcad] <==
	W0917 16:56:32.615429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.615914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:32.615953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:32.615976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:32.616015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.493645       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:56:33.493786       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 16:56:33.498727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.498778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.500587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:56:33.500622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.737858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:33.737918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.762653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:33.762736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.781707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:33.781836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.846619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 16:56:33.846670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.921594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 16:56:33.922678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:33.924769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:33.924819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:35.706106       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:12:05 addons-408385 kubelet[1205]: E0917 17:12:05.800700    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593125799753924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:07 addons-408385 kubelet[1205]: E0917 17:12:07.294176    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b1cf7e30-fdc3-4fed-88ab-f3634aace95b"
	Sep 17 17:12:15 addons-408385 kubelet[1205]: E0917 17:12:15.804056    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593135803612085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:15 addons-408385 kubelet[1205]: E0917 17:12:15.804086    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593135803612085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:18 addons-408385 kubelet[1205]: E0917 17:12:18.292731    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b1cf7e30-fdc3-4fed-88ab-f3634aace95b"
	Sep 17 17:12:25 addons-408385 kubelet[1205]: E0917 17:12:25.807519    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593145806859126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:25 addons-408385 kubelet[1205]: E0917 17:12:25.807806    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593145806859126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:31 addons-408385 kubelet[1205]: E0917 17:12:31.293409    1205 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b1cf7e30-fdc3-4fed-88ab-f3634aace95b"
	Sep 17 17:12:35 addons-408385 kubelet[1205]: E0917 17:12:35.322233    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:12:35 addons-408385 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:12:35 addons-408385 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:12:35 addons-408385 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:12:35 addons-408385 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:12:35 addons-408385 kubelet[1205]: E0917 17:12:35.811005    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593155810642954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:35 addons-408385 kubelet[1205]: E0917 17:12:35.811052    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593155810642954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.328735    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdhs\" (UniqueName: \"kubernetes.io/projected/b55954ef-19c5-428e-b2f5-64cb84921e99-kube-api-access-xtdhs\") pod \"b55954ef-19c5-428e-b2f5-64cb84921e99\" (UID: \"b55954ef-19c5-428e-b2f5-64cb84921e99\") "
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.328787    1205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b55954ef-19c5-428e-b2f5-64cb84921e99-tmp-dir\") pod \"b55954ef-19c5-428e-b2f5-64cb84921e99\" (UID: \"b55954ef-19c5-428e-b2f5-64cb84921e99\") "
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.329203    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55954ef-19c5-428e-b2f5-64cb84921e99-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "b55954ef-19c5-428e-b2f5-64cb84921e99" (UID: "b55954ef-19c5-428e-b2f5-64cb84921e99"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.332680    1205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55954ef-19c5-428e-b2f5-64cb84921e99-kube-api-access-xtdhs" (OuterVolumeSpecName: "kube-api-access-xtdhs") pod "b55954ef-19c5-428e-b2f5-64cb84921e99" (UID: "b55954ef-19c5-428e-b2f5-64cb84921e99"). InnerVolumeSpecName "kube-api-access-xtdhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.429862    1205 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xtdhs\" (UniqueName: \"kubernetes.io/projected/b55954ef-19c5-428e-b2f5-64cb84921e99-kube-api-access-xtdhs\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.429899    1205 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b55954ef-19c5-428e-b2f5-64cb84921e99-tmp-dir\") on node \"addons-408385\" DevicePath \"\""
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.463699    1205 scope.go:117] "RemoveContainer" containerID="c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf"
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.511528    1205 scope.go:117] "RemoveContainer" containerID="c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf"
	Sep 17 17:12:38 addons-408385 kubelet[1205]: E0917 17:12:38.512185    1205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf\": container with ID starting with c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf not found: ID does not exist" containerID="c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf"
	Sep 17 17:12:38 addons-408385 kubelet[1205]: I0917 17:12:38.512244    1205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf"} err="failed to get container status \"c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf\": rpc error: code = NotFound desc = could not find container \"c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf\": container with ID starting with c35ba12caa08b4b7dff115e7bec3ccfea7487866b24701619f1eb1dec01e5cdf not found: ID does not exist"
	
	
	==> storage-provisioner [4b3332c3d67663e332214a4d2a7600673eb50a2e85f955febcae8ae6bdc61cea] <==
	I0917 16:56:47.969062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:48.024282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:48.024402       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:48.040055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:48.041757       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8b747fa-ca28-40fb-9f2b-ae004859bb2e", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d became leader
	I0917 16:56:48.043770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	I0917 16:56:48.148903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-408385_c2b88aec-d75b-48d3-8371-4055fb3d5c3d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-408385 -n addons-408385
helpers_test.go:261: (dbg) Run:  kubectl --context addons-408385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-408385 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-408385 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-408385/192.168.39.170
	Start Time:       Tue, 17 Sep 2024 16:58:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhf5n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hhf5n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-408385
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m45s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (346.04s)

TestMultiControlPlane/serial/StopSecondaryNode (142s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 node stop m02 -v=7 --alsologtostderr
E0917 17:21:45.478555   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:22:05.960159   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:22:46.921829   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.490353898s)

-- stdout --
	* Stopping node "ha-181247-m02"  ...

-- /stdout --
** stderr ** 
	I0917 17:21:41.798804   34117 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:21:41.798984   34117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:21:41.798996   34117 out.go:358] Setting ErrFile to fd 2...
	I0917 17:21:41.799002   34117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:21:41.799215   34117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:21:41.799501   34117 mustload.go:65] Loading cluster: ha-181247
	I0917 17:21:41.799912   34117 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:21:41.799933   34117 stop.go:39] StopHost: ha-181247-m02
	I0917 17:21:41.800315   34117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:21:41.800365   34117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:21:41.816796   34117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0917 17:21:41.817308   34117 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:21:41.817863   34117 main.go:141] libmachine: Using API Version  1
	I0917 17:21:41.817889   34117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:21:41.818217   34117 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:21:41.820548   34117 out.go:177] * Stopping node "ha-181247-m02"  ...
	I0917 17:21:41.821743   34117 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 17:21:41.821784   34117 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:21:41.822020   34117 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 17:21:41.822043   34117 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:21:41.824888   34117 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:21:41.825368   34117 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:21:41.825392   34117 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:21:41.825571   34117 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:21:41.825771   34117 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:21:41.825913   34117 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:21:41.826047   34117 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:21:41.916313   34117 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 17:21:41.974249   34117 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 17:21:42.030489   34117 main.go:141] libmachine: Stopping "ha-181247-m02"...
	I0917 17:21:42.030557   34117 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:21:42.032305   34117 main.go:141] libmachine: (ha-181247-m02) Calling .Stop
	I0917 17:21:42.035960   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 0/120
	I0917 17:21:43.037417   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 1/120
	I0917 17:21:44.039638   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 2/120
	I0917 17:21:45.041245   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 3/120
	I0917 17:21:46.042487   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 4/120
	I0917 17:21:47.044404   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 5/120
	I0917 17:21:48.045869   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 6/120
	I0917 17:21:49.047497   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 7/120
	I0917 17:21:50.048737   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 8/120
	I0917 17:21:51.050719   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 9/120
	I0917 17:21:52.052978   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 10/120
	I0917 17:21:53.054233   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 11/120
	I0917 17:21:54.055360   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 12/120
	I0917 17:21:55.056921   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 13/120
	I0917 17:21:56.058392   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 14/120
	I0917 17:21:57.060132   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 15/120
	I0917 17:21:58.061787   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 16/120
	I0917 17:21:59.063533   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 17/120
	I0917 17:22:00.064796   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 18/120
	I0917 17:22:01.066015   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 19/120
	I0917 17:22:02.068043   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 20/120
	I0917 17:22:03.070116   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 21/120
	I0917 17:22:04.071851   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 22/120
	I0917 17:22:05.073514   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 23/120
	I0917 17:22:06.074940   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 24/120
	I0917 17:22:07.077300   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 25/120
	I0917 17:22:08.078984   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 26/120
	I0917 17:22:09.081406   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 27/120
	I0917 17:22:10.083737   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 28/120
	I0917 17:22:11.086141   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 29/120
	I0917 17:22:12.088310   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 30/120
	I0917 17:22:13.089819   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 31/120
	I0917 17:22:14.091845   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 32/120
	I0917 17:22:15.093225   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 33/120
	I0917 17:22:16.095099   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 34/120
	I0917 17:22:17.097009   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 35/120
	I0917 17:22:18.098339   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 36/120
	I0917 17:22:19.099660   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 37/120
	I0917 17:22:20.101290   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 38/120
	I0917 17:22:21.103045   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 39/120
	I0917 17:22:22.105446   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 40/120
	I0917 17:22:23.107382   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 41/120
	I0917 17:22:24.108751   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 42/120
	I0917 17:22:25.111109   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 43/120
	I0917 17:22:26.112553   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 44/120
	I0917 17:22:27.113965   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 45/120
	I0917 17:22:28.115858   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 46/120
	I0917 17:22:29.117061   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 47/120
	I0917 17:22:30.118926   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 48/120
	I0917 17:22:31.120219   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 49/120
	I0917 17:22:32.121539   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 50/120
	I0917 17:22:33.123666   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 51/120
	I0917 17:22:34.125314   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 52/120
	I0917 17:22:35.127113   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 53/120
	I0917 17:22:36.128852   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 54/120
	I0917 17:22:37.130959   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 55/120
	I0917 17:22:38.132466   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 56/120
	I0917 17:22:39.134051   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 57/120
	I0917 17:22:40.136352   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 58/120
	I0917 17:22:41.137877   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 59/120
	I0917 17:22:42.140262   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 60/120
	I0917 17:22:43.141667   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 61/120
	I0917 17:22:44.142913   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 62/120
	I0917 17:22:45.144030   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 63/120
	I0917 17:22:46.145598   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 64/120
	I0917 17:22:47.147430   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 65/120
	I0917 17:22:48.148679   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 66/120
	I0917 17:22:49.150637   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 67/120
	I0917 17:22:50.152100   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 68/120
	I0917 17:22:51.154050   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 69/120
	I0917 17:22:52.156442   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 70/120
	I0917 17:22:53.157851   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 71/120
	I0917 17:22:54.159935   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 72/120
	I0917 17:22:55.161165   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 73/120
	I0917 17:22:56.162554   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 74/120
	I0917 17:22:57.164604   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 75/120
	I0917 17:22:58.166976   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 76/120
	I0917 17:22:59.168420   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 77/120
	I0917 17:23:00.169890   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 78/120
	I0917 17:23:01.171771   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 79/120
	I0917 17:23:02.173562   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 80/120
	I0917 17:23:03.174973   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 81/120
	I0917 17:23:04.176470   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 82/120
	I0917 17:23:05.178523   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 83/120
	I0917 17:23:06.179645   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 84/120
	I0917 17:23:07.181663   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 85/120
	I0917 17:23:08.183867   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 86/120
	I0917 17:23:09.185151   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 87/120
	I0917 17:23:10.186466   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 88/120
	I0917 17:23:11.188197   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 89/120
	I0917 17:23:12.190622   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 90/120
	I0917 17:23:13.191972   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 91/120
	I0917 17:23:14.193451   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 92/120
	I0917 17:23:15.195808   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 93/120
	I0917 17:23:16.197388   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 94/120
	I0917 17:23:17.199266   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 95/120
	I0917 17:23:18.200966   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 96/120
	I0917 17:23:19.203063   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 97/120
	I0917 17:23:20.204476   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 98/120
	I0917 17:23:21.205972   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 99/120
	I0917 17:23:22.208141   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 100/120
	I0917 17:23:23.210108   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 101/120
	I0917 17:23:24.211434   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 102/120
	I0917 17:23:25.212909   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 103/120
	I0917 17:23:26.214416   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 104/120
	I0917 17:23:27.216602   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 105/120
	I0917 17:23:28.218038   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 106/120
	I0917 17:23:29.219556   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 107/120
	I0917 17:23:30.221029   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 108/120
	I0917 17:23:31.223075   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 109/120
	I0917 17:23:32.225286   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 110/120
	I0917 17:23:33.226893   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 111/120
	I0917 17:23:34.228382   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 112/120
	I0917 17:23:35.230008   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 113/120
	I0917 17:23:36.231787   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 114/120
	I0917 17:23:37.233188   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 115/120
	I0917 17:23:38.234752   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 116/120
	I0917 17:23:39.236129   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 117/120
	I0917 17:23:40.237834   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 118/120
	I0917 17:23:41.240027   34117 main.go:141] libmachine: (ha-181247-m02) Waiting for machine to stop 119/120
	I0917 17:23:42.240621   34117 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 17:23:42.240794   34117 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-181247 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
E0917 17:23:50.533297   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (19.159313799s)

-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 17:23:42.289431   34564 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:23:42.289760   34564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:23:42.289771   34564 out.go:358] Setting ErrFile to fd 2...
	I0917 17:23:42.289778   34564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:23:42.290102   34564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:23:42.290347   34564 out.go:352] Setting JSON to false
	I0917 17:23:42.290382   34564 mustload.go:65] Loading cluster: ha-181247
	I0917 17:23:42.290509   34564 notify.go:220] Checking for updates...
	I0917 17:23:42.291001   34564 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:23:42.291021   34564 status.go:255] checking status of ha-181247 ...
	I0917 17:23:42.291627   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.291706   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.314303   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0917 17:23:42.314824   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.315505   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.315531   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.315948   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.316124   34564 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:23:42.317746   34564 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:23:42.317761   34564 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:23:42.318083   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.318122   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.333445   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0917 17:23:42.333935   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.334502   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.334523   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.334861   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.335076   34564 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:23:42.337912   34564 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:23:42.338394   34564 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:23:42.338430   34564 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:23:42.338628   34564 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:23:42.339081   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.339121   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.356807   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0917 17:23:42.357326   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.357831   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.357854   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.358230   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.358443   34564 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:23:42.358663   34564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:23:42.358699   34564 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:23:42.361799   34564 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:23:42.362231   34564 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:23:42.362269   34564 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:23:42.362479   34564 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:23:42.362653   34564 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:23:42.362795   34564 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:23:42.362905   34564 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:23:42.455356   34564 ssh_runner.go:195] Run: systemctl --version
	I0917 17:23:42.463928   34564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:23:42.482725   34564 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:23:42.482761   34564 api_server.go:166] Checking apiserver status ...
	I0917 17:23:42.482799   34564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:23:42.499292   34564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:23:42.509619   34564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:23:42.509675   34564 ssh_runner.go:195] Run: ls
	I0917 17:23:42.514464   34564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:23:42.520670   34564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:23:42.520696   34564 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:23:42.520708   34564 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:23:42.520738   34564 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:23:42.521042   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.521090   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.536145   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0917 17:23:42.536676   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.537279   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.537302   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.537642   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.537802   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:23:42.539409   34564 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:23:42.539423   34564 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:23:42.539736   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.539770   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.555287   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0917 17:23:42.555801   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.556267   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.556287   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.556651   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.556874   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:23:42.559793   34564 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:23:42.560234   34564 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:23:42.560262   34564 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:23:42.560380   34564 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:23:42.560798   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:23:42.560850   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:23:42.575876   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0917 17:23:42.576345   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:23:42.576867   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:23:42.576896   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:23:42.577200   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:23:42.577451   34564 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:23:42.577649   34564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:23:42.577671   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:23:42.580902   34564 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:23:42.581449   34564 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:23:42.581481   34564 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:23:42.581703   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:23:42.581874   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:23:42.582033   34564 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:23:42.582172   34564 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:01.005581   34564 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:01.005672   34564 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:01.005715   34564 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:01.005734   34564 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:01.005759   34564 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:01.005771   34564 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:01.006094   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.006151   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.022989   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0917 17:24:01.023441   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.024031   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.024060   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.024391   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.024581   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:01.026300   34564 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:01.026316   34564 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:01.026629   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.026674   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.044340   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0917 17:24:01.044788   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.045340   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.045368   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.045736   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.045926   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:01.048859   34564 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:01.049367   34564 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:01.049417   34564 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:01.049604   34564 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:01.050073   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.050128   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.066009   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0917 17:24:01.066477   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.066956   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.066977   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.067342   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.067532   34564 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:01.067740   34564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:01.067764   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:01.070821   34564 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:01.071389   34564 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:01.071415   34564 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:01.071520   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:01.071691   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:01.071858   34564 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:01.072028   34564 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:01.162450   34564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:01.182490   34564 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:01.182528   34564 api_server.go:166] Checking apiserver status ...
	I0917 17:24:01.182574   34564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:01.201350   34564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:01.214550   34564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:01.214645   34564 ssh_runner.go:195] Run: ls
	I0917 17:24:01.219618   34564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:01.224203   34564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:01.224232   34564 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:01.224243   34564 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:01.224280   34564 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:01.224639   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.224673   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.239803   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0917 17:24:01.240273   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.240714   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.240735   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.241105   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.241279   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:01.243141   34564 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:01.243159   34564 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:01.243462   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.243517   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.259385   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0917 17:24:01.259828   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.260336   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.260361   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.260679   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.260893   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:01.263905   34564 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:01.264461   34564 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:01.264483   34564 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:01.264671   34564 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:01.265107   34564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:01.265163   34564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:01.280608   34564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0917 17:24:01.281143   34564 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:01.281670   34564 main.go:141] libmachine: Using API Version  1
	I0917 17:24:01.281695   34564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:01.282006   34564 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:01.282186   34564 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:01.282398   34564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:01.282420   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:01.285484   34564 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:01.285948   34564 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:01.285979   34564 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:01.286112   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:01.286320   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:01.286505   34564 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:01.286682   34564 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:01.379741   34564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:01.399999   34564 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-181247 -n ha-181247
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-181247 logs -n 25: (1.451040413s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m03_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m04 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp testdata/cp-test.txt                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m04_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03:/home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m03 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-181247 node stop m02 -v=7                                                     | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:17:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:17:12.295260   29734 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:17:12.295383   29734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:17:12.295392   29734 out.go:358] Setting ErrFile to fd 2...
	I0917 17:17:12.295396   29734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:17:12.295568   29734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:17:12.296178   29734 out.go:352] Setting JSON to false
	I0917 17:17:12.297084   29734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3547,"bootTime":1726589885,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:17:12.297187   29734 start.go:139] virtualization: kvm guest
	I0917 17:17:12.299632   29734 out.go:177] * [ha-181247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:17:12.301202   29734 notify.go:220] Checking for updates...
	I0917 17:17:12.301208   29734 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:17:12.302756   29734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:17:12.304156   29734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:12.305489   29734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:12.306572   29734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:17:12.307710   29734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:17:12.309117   29734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:17:12.344556   29734 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 17:17:12.345884   29734 start.go:297] selected driver: kvm2
	I0917 17:17:12.345897   29734 start.go:901] validating driver "kvm2" against <nil>
	I0917 17:17:12.345915   29734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:17:12.346647   29734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:17:12.346716   29734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:17:12.362456   29734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:17:12.362516   29734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 17:17:12.362773   29734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:17:12.362807   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:12.362842   29734 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 17:17:12.362850   29734 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 17:17:12.362901   29734 start.go:340] cluster config:
	{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:17:12.362994   29734 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:17:12.365161   29734 out.go:177] * Starting "ha-181247" primary control-plane node in "ha-181247" cluster
	I0917 17:17:12.366603   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:12.366647   29734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:17:12.366658   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:17:12.366754   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:17:12.366765   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:17:12.367061   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:12.367079   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json: {Name:mk21af64916b6c67dc99ac97417f17a21d879838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:12.367216   29734 start.go:360] acquireMachinesLock for ha-181247: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:17:12.367244   29734 start.go:364] duration metric: took 15.704µs to acquireMachinesLock for "ha-181247"
	I0917 17:17:12.367260   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:12.367314   29734 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 17:17:12.369104   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:17:12.369279   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:12.369322   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:12.384105   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0917 17:17:12.384598   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:12.385148   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:12.385167   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:12.385543   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:12.385711   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:12.385846   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:12.385978   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:17:12.386003   29734 client.go:168] LocalClient.Create starting
	I0917 17:17:12.386030   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:17:12.386066   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:12.386080   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:12.386133   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:17:12.386155   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:12.386170   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:12.386187   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:17:12.386195   29734 main.go:141] libmachine: (ha-181247) Calling .PreCreateCheck
	I0917 17:17:12.386517   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:12.386899   29734 main.go:141] libmachine: Creating machine...
	I0917 17:17:12.386911   29734 main.go:141] libmachine: (ha-181247) Calling .Create
	I0917 17:17:12.387046   29734 main.go:141] libmachine: (ha-181247) Creating KVM machine...
	I0917 17:17:12.388285   29734 main.go:141] libmachine: (ha-181247) DBG | found existing default KVM network
	I0917 17:17:12.388993   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.388835   29757 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151e0}
	I0917 17:17:12.389058   29734 main.go:141] libmachine: (ha-181247) DBG | created network xml: 
	I0917 17:17:12.389082   29734 main.go:141] libmachine: (ha-181247) DBG | <network>
	I0917 17:17:12.389090   29734 main.go:141] libmachine: (ha-181247) DBG |   <name>mk-ha-181247</name>
	I0917 17:17:12.389097   29734 main.go:141] libmachine: (ha-181247) DBG |   <dns enable='no'/>
	I0917 17:17:12.389120   29734 main.go:141] libmachine: (ha-181247) DBG |   
	I0917 17:17:12.389132   29734 main.go:141] libmachine: (ha-181247) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0917 17:17:12.389140   29734 main.go:141] libmachine: (ha-181247) DBG |     <dhcp>
	I0917 17:17:12.389148   29734 main.go:141] libmachine: (ha-181247) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0917 17:17:12.389171   29734 main.go:141] libmachine: (ha-181247) DBG |     </dhcp>
	I0917 17:17:12.389194   29734 main.go:141] libmachine: (ha-181247) DBG |   </ip>
	I0917 17:17:12.389205   29734 main.go:141] libmachine: (ha-181247) DBG |   
	I0917 17:17:12.389211   29734 main.go:141] libmachine: (ha-181247) DBG | </network>
	I0917 17:17:12.389222   29734 main.go:141] libmachine: (ha-181247) DBG | 
	I0917 17:17:12.394697   29734 main.go:141] libmachine: (ha-181247) DBG | trying to create private KVM network mk-ha-181247 192.168.39.0/24...
	I0917 17:17:12.464235   29734 main.go:141] libmachine: (ha-181247) DBG | private KVM network mk-ha-181247 192.168.39.0/24 created
	I0917 17:17:12.464265   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.464199   29757 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:12.464275   29734 main.go:141] libmachine: (ha-181247) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 ...
	I0917 17:17:12.464289   29734 main.go:141] libmachine: (ha-181247) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:17:12.464466   29734 main.go:141] libmachine: (ha-181247) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:17:12.728745   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.728594   29757 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa...
	I0917 17:17:13.051914   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:13.051793   29757 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/ha-181247.rawdisk...
	I0917 17:17:13.051946   29734 main.go:141] libmachine: (ha-181247) DBG | Writing magic tar header
	I0917 17:17:13.051955   29734 main.go:141] libmachine: (ha-181247) DBG | Writing SSH key tar header
	I0917 17:17:13.051962   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:13.051909   29757 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 ...
	I0917 17:17:13.052022   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247
	I0917 17:17:13.052114   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 (perms=drwx------)
	I0917 17:17:13.052148   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:17:13.052165   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:17:13.052178   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:17:13.052190   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:13.052207   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:17:13.052215   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:17:13.052222   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:17:13.052228   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home
	I0917 17:17:13.052235   29734 main.go:141] libmachine: (ha-181247) DBG | Skipping /home - not owner
	I0917 17:17:13.052245   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:17:13.052253   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:17:13.052260   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:17:13.052266   29734 main.go:141] libmachine: (ha-181247) Creating domain...
	I0917 17:17:13.053913   29734 main.go:141] libmachine: (ha-181247) define libvirt domain using xml: 
	I0917 17:17:13.053929   29734 main.go:141] libmachine: (ha-181247) <domain type='kvm'>
	I0917 17:17:13.053935   29734 main.go:141] libmachine: (ha-181247)   <name>ha-181247</name>
	I0917 17:17:13.053940   29734 main.go:141] libmachine: (ha-181247)   <memory unit='MiB'>2200</memory>
	I0917 17:17:13.053945   29734 main.go:141] libmachine: (ha-181247)   <vcpu>2</vcpu>
	I0917 17:17:13.053948   29734 main.go:141] libmachine: (ha-181247)   <features>
	I0917 17:17:13.053953   29734 main.go:141] libmachine: (ha-181247)     <acpi/>
	I0917 17:17:13.053957   29734 main.go:141] libmachine: (ha-181247)     <apic/>
	I0917 17:17:13.053961   29734 main.go:141] libmachine: (ha-181247)     <pae/>
	I0917 17:17:13.053966   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.053970   29734 main.go:141] libmachine: (ha-181247)   </features>
	I0917 17:17:13.053975   29734 main.go:141] libmachine: (ha-181247)   <cpu mode='host-passthrough'>
	I0917 17:17:13.053980   29734 main.go:141] libmachine: (ha-181247)   
	I0917 17:17:13.053984   29734 main.go:141] libmachine: (ha-181247)   </cpu>
	I0917 17:17:13.053988   29734 main.go:141] libmachine: (ha-181247)   <os>
	I0917 17:17:13.053993   29734 main.go:141] libmachine: (ha-181247)     <type>hvm</type>
	I0917 17:17:13.053998   29734 main.go:141] libmachine: (ha-181247)     <boot dev='cdrom'/>
	I0917 17:17:13.054004   29734 main.go:141] libmachine: (ha-181247)     <boot dev='hd'/>
	I0917 17:17:13.054009   29734 main.go:141] libmachine: (ha-181247)     <bootmenu enable='no'/>
	I0917 17:17:13.054015   29734 main.go:141] libmachine: (ha-181247)   </os>
	I0917 17:17:13.054044   29734 main.go:141] libmachine: (ha-181247)   <devices>
	I0917 17:17:13.054063   29734 main.go:141] libmachine: (ha-181247)     <disk type='file' device='cdrom'>
	I0917 17:17:13.054090   29734 main.go:141] libmachine: (ha-181247)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/boot2docker.iso'/>
	I0917 17:17:13.054111   29734 main.go:141] libmachine: (ha-181247)       <target dev='hdc' bus='scsi'/>
	I0917 17:17:13.054122   29734 main.go:141] libmachine: (ha-181247)       <readonly/>
	I0917 17:17:13.054136   29734 main.go:141] libmachine: (ha-181247)     </disk>
	I0917 17:17:13.054148   29734 main.go:141] libmachine: (ha-181247)     <disk type='file' device='disk'>
	I0917 17:17:13.054159   29734 main.go:141] libmachine: (ha-181247)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:17:13.054174   29734 main.go:141] libmachine: (ha-181247)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/ha-181247.rawdisk'/>
	I0917 17:17:13.054184   29734 main.go:141] libmachine: (ha-181247)       <target dev='hda' bus='virtio'/>
	I0917 17:17:13.054192   29734 main.go:141] libmachine: (ha-181247)     </disk>
	I0917 17:17:13.054200   29734 main.go:141] libmachine: (ha-181247)     <interface type='network'>
	I0917 17:17:13.054214   29734 main.go:141] libmachine: (ha-181247)       <source network='mk-ha-181247'/>
	I0917 17:17:13.054222   29734 main.go:141] libmachine: (ha-181247)       <model type='virtio'/>
	I0917 17:17:13.054240   29734 main.go:141] libmachine: (ha-181247)     </interface>
	I0917 17:17:13.054247   29734 main.go:141] libmachine: (ha-181247)     <interface type='network'>
	I0917 17:17:13.054252   29734 main.go:141] libmachine: (ha-181247)       <source network='default'/>
	I0917 17:17:13.054256   29734 main.go:141] libmachine: (ha-181247)       <model type='virtio'/>
	I0917 17:17:13.054260   29734 main.go:141] libmachine: (ha-181247)     </interface>
	I0917 17:17:13.054264   29734 main.go:141] libmachine: (ha-181247)     <serial type='pty'>
	I0917 17:17:13.054269   29734 main.go:141] libmachine: (ha-181247)       <target port='0'/>
	I0917 17:17:13.054272   29734 main.go:141] libmachine: (ha-181247)     </serial>
	I0917 17:17:13.054276   29734 main.go:141] libmachine: (ha-181247)     <console type='pty'>
	I0917 17:17:13.054280   29734 main.go:141] libmachine: (ha-181247)       <target type='serial' port='0'/>
	I0917 17:17:13.054287   29734 main.go:141] libmachine: (ha-181247)     </console>
	I0917 17:17:13.054296   29734 main.go:141] libmachine: (ha-181247)     <rng model='virtio'>
	I0917 17:17:13.054301   29734 main.go:141] libmachine: (ha-181247)       <backend model='random'>/dev/random</backend>
	I0917 17:17:13.054307   29734 main.go:141] libmachine: (ha-181247)     </rng>
	I0917 17:17:13.054312   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.054317   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.054322   29734 main.go:141] libmachine: (ha-181247)   </devices>
	I0917 17:17:13.054325   29734 main.go:141] libmachine: (ha-181247) </domain>
	I0917 17:17:13.054331   29734 main.go:141] libmachine: (ha-181247) 
	I0917 17:17:13.058801   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:e9:c7:ce in network default
	I0917 17:17:13.059353   29734 main.go:141] libmachine: (ha-181247) Ensuring networks are active...
	I0917 17:17:13.059369   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:13.060130   29734 main.go:141] libmachine: (ha-181247) Ensuring network default is active
	I0917 17:17:13.060461   29734 main.go:141] libmachine: (ha-181247) Ensuring network mk-ha-181247 is active
	I0917 17:17:13.060945   29734 main.go:141] libmachine: (ha-181247) Getting domain xml...
	I0917 17:17:13.061685   29734 main.go:141] libmachine: (ha-181247) Creating domain...
	I0917 17:17:14.270331   29734 main.go:141] libmachine: (ha-181247) Waiting to get IP...
	I0917 17:17:14.271018   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.271449   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.271500   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.271435   29757 retry.go:31] will retry after 207.881018ms: waiting for machine to come up
	I0917 17:17:14.480839   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.481383   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.481413   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.481338   29757 retry.go:31] will retry after 323.692976ms: waiting for machine to come up
	I0917 17:17:14.806856   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.807287   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.807309   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.807243   29757 retry.go:31] will retry after 339.921351ms: waiting for machine to come up
	I0917 17:17:15.148971   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:15.149412   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:15.149439   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:15.149393   29757 retry.go:31] will retry after 383.286106ms: waiting for machine to come up
	I0917 17:17:15.534034   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:15.534603   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:15.534629   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:15.534563   29757 retry.go:31] will retry after 575.428604ms: waiting for machine to come up
	I0917 17:17:16.111428   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:16.111851   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:16.111891   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:16.111782   29757 retry.go:31] will retry after 923.833339ms: waiting for machine to come up
	I0917 17:17:17.036886   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:17.037288   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:17.037324   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:17.037247   29757 retry.go:31] will retry after 853.549592ms: waiting for machine to come up
	I0917 17:17:17.892848   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:17.893205   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:17.893242   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:17.893158   29757 retry.go:31] will retry after 1.313972164s: waiting for machine to come up
	I0917 17:17:19.208284   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:19.208773   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:19.208798   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:19.208735   29757 retry.go:31] will retry after 1.71538151s: waiting for machine to come up
	I0917 17:17:20.926651   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:20.927074   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:20.927103   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:20.927027   29757 retry.go:31] will retry after 2.217693124s: waiting for machine to come up
	I0917 17:17:23.146319   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:23.146752   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:23.146783   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:23.146687   29757 retry.go:31] will retry after 1.923987178s: waiting for machine to come up
	I0917 17:17:25.072729   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:25.073147   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:25.073189   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:25.073114   29757 retry.go:31] will retry after 3.588058762s: waiting for machine to come up
	I0917 17:17:28.662628   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:28.663074   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:28.663093   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:28.663020   29757 retry.go:31] will retry after 4.377762468s: waiting for machine to come up
	I0917 17:17:33.042665   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.043121   29734 main.go:141] libmachine: (ha-181247) Found IP for machine: 192.168.39.195
	I0917 17:17:33.043147   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has current primary IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.043155   29734 main.go:141] libmachine: (ha-181247) Reserving static IP address...
	I0917 17:17:33.043511   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find host DHCP lease matching {name: "ha-181247", mac: "52:54:00:51:1e:14", ip: "192.168.39.195"} in network mk-ha-181247
	I0917 17:17:33.119976   29734 main.go:141] libmachine: (ha-181247) Reserved static IP address: 192.168.39.195
	I0917 17:17:33.120002   29734 main.go:141] libmachine: (ha-181247) Waiting for SSH to be available...
	I0917 17:17:33.120013   29734 main.go:141] libmachine: (ha-181247) DBG | Getting to WaitForSSH function...
	I0917 17:17:33.122580   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.122982   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.123015   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.123194   29734 main.go:141] libmachine: (ha-181247) DBG | Using SSH client type: external
	I0917 17:17:33.123219   29734 main.go:141] libmachine: (ha-181247) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa (-rw-------)
	I0917 17:17:33.123283   29734 main.go:141] libmachine: (ha-181247) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:17:33.123304   29734 main.go:141] libmachine: (ha-181247) DBG | About to run SSH command:
	I0917 17:17:33.123322   29734 main.go:141] libmachine: (ha-181247) DBG | exit 0
	I0917 17:17:33.253772   29734 main.go:141] libmachine: (ha-181247) DBG | SSH cmd err, output: <nil>: 
	I0917 17:17:33.254049   29734 main.go:141] libmachine: (ha-181247) KVM machine creation complete!
	I0917 17:17:33.254406   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:33.254993   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:33.255173   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:33.255336   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:17:33.255371   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:33.256649   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:17:33.256662   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:17:33.256670   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:17:33.256677   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.258972   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.259370   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.259394   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.259523   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.259702   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.259827   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.259954   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.260147   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.260340   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.260352   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:17:33.368747   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:17:33.368769   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:17:33.368777   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.371379   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.371741   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.371768   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.371890   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.372061   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.372235   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.372326   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.372476   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.372646   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.372660   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:17:33.486345   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:17:33.486422   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:17:33.486430   29734 main.go:141] libmachine: Provisioning with buildroot...
	I0917 17:17:33.486437   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.486682   29734 buildroot.go:166] provisioning hostname "ha-181247"
	I0917 17:17:33.486709   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.486904   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.489683   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.490031   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.490057   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.490210   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.490396   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.490505   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.490639   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.490837   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.491006   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.491017   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247 && echo "ha-181247" | sudo tee /etc/hostname
	I0917 17:17:33.617089   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:17:33.617115   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.619660   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.619945   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.619972   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.620114   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.620302   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.620453   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.620612   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.620771   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.620926   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.620941   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:17:33.738853   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:17:33.738881   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:17:33.738936   29734 buildroot.go:174] setting up certificates
	I0917 17:17:33.738950   29734 provision.go:84] configureAuth start
	I0917 17:17:33.738967   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.739211   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:33.741845   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.742160   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.742179   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.742325   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.744318   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.744701   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.744727   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.744831   29734 provision.go:143] copyHostCerts
	I0917 17:17:33.744878   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:17:33.744930   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:17:33.744945   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:17:33.745036   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:17:33.745171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:17:33.745202   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:17:33.745212   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:17:33.745274   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:17:33.745363   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:17:33.745487   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:17:33.745507   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:17:33.745580   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:17:33.745692   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247 san=[127.0.0.1 192.168.39.195 ha-181247 localhost minikube]
	I0917 17:17:33.826857   29734 provision.go:177] copyRemoteCerts
	I0917 17:17:33.826917   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:17:33.826943   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.829527   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.829844   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.829887   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.830118   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.830303   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.830463   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.830573   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:33.915861   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:17:33.915948   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:17:33.941842   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:17:33.941920   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 17:17:33.967877   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:17:33.967945   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:17:33.993729   29734 provision.go:87] duration metric: took 254.764989ms to configureAuth
	I0917 17:17:33.993752   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:17:33.993914   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:33.994039   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.996709   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.997053   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.997081   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.997264   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.997459   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.997601   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.997716   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.997851   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.998110   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.998128   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:17:34.246446   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:17:34.246473   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:17:34.246493   29734 main.go:141] libmachine: (ha-181247) Calling .GetURL
	I0917 17:17:34.247798   29734 main.go:141] libmachine: (ha-181247) DBG | Using libvirt version 6000000
	I0917 17:17:34.250061   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.250427   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.250452   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.250641   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:17:34.250653   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:17:34.250659   29734 client.go:171] duration metric: took 21.864649423s to LocalClient.Create
	I0917 17:17:34.250680   29734 start.go:167] duration metric: took 21.864702696s to libmachine.API.Create "ha-181247"
	I0917 17:17:34.250689   29734 start.go:293] postStartSetup for "ha-181247" (driver="kvm2")
	I0917 17:17:34.250697   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:17:34.250712   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.250982   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:17:34.251008   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.253068   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.253358   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.253395   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.253512   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.253685   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.253843   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.254020   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.339891   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:17:34.344617   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:17:34.344645   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:17:34.344722   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:17:34.344816   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:17:34.344827   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:17:34.344956   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:17:34.355317   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:17:34.382262   29734 start.go:296] duration metric: took 131.561481ms for postStartSetup
	I0917 17:17:34.382311   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:34.382983   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:34.385552   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.385902   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.385928   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.386184   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:34.386420   29734 start.go:128] duration metric: took 22.019096291s to createHost
	I0917 17:17:34.386441   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.388754   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.389042   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.389073   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.389195   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.389386   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.389604   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.389763   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.389934   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:34.390094   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:34.390103   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:17:34.502406   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593454.466259739
	
	I0917 17:17:34.502432   29734 fix.go:216] guest clock: 1726593454.466259739
	I0917 17:17:34.502438   29734 fix.go:229] Guest: 2024-09-17 17:17:34.466259739 +0000 UTC Remote: 2024-09-17 17:17:34.386430471 +0000 UTC m=+22.127309401 (delta=79.829268ms)
	I0917 17:17:34.502463   29734 fix.go:200] guest clock delta is within tolerance: 79.829268ms
	I0917 17:17:34.502467   29734 start.go:83] releasing machines lock for "ha-181247", held for 22.135215361s
	I0917 17:17:34.502486   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.502755   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:34.505581   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.505944   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.505981   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.506132   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506672   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506814   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506899   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:17:34.506935   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.507038   29734 ssh_runner.go:195] Run: cat /version.json
	I0917 17:17:34.507056   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.509573   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.509927   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.509981   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510009   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510033   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.510241   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.510425   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.510492   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.510512   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510574   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.510671   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.510835   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.510969   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.511086   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.590070   29734 ssh_runner.go:195] Run: systemctl --version
	I0917 17:17:34.615056   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:17:34.776832   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:17:34.783180   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:17:34.783255   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:17:34.803950   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:17:34.803973   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:17:34.804063   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:17:34.822926   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:17:34.838681   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:17:34.838762   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:17:34.853846   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:17:34.869233   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:17:34.994726   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:17:35.155499   29734 docker.go:233] disabling docker service ...
	I0917 17:17:35.155559   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:17:35.171137   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:17:35.185712   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:17:35.320259   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:17:35.451710   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:17:35.466710   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:17:35.487225   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:17:35.487292   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.499290   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:17:35.499383   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.511419   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.523655   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.536429   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:17:35.549878   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.562463   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.582664   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.595132   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:17:35.606971   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:17:35.607027   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:17:35.621832   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:17:35.633279   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:17:35.759427   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:17:35.859232   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:17:35.859323   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:17:35.864467   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:17:35.864539   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:17:35.868712   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:17:35.914425   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:17:35.914511   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:17:35.945749   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:17:35.979543   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:17:35.981161   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:35.983776   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:35.984080   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:35.984123   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:35.984272   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:17:35.988783   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:17:36.003551   29734 kubeadm.go:883] updating cluster {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:17:36.003694   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:36.003743   29734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:17:36.040043   29734 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 17:17:36.040121   29734 ssh_runner.go:195] Run: which lz4
	I0917 17:17:36.044793   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0917 17:17:36.044906   29734 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 17:17:36.049616   29734 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 17:17:36.049651   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 17:17:37.526460   29734 crio.go:462] duration metric: took 1.481579452s to copy over tarball
	I0917 17:17:37.526554   29734 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 17:17:39.581865   29734 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.055284031s)
	I0917 17:17:39.581906   29734 crio.go:469] duration metric: took 2.055410897s to extract the tarball
	I0917 17:17:39.581916   29734 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 17:17:39.619715   29734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:17:39.667830   29734 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:17:39.667853   29734 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:17:39.667862   29734 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.1 crio true true} ...
	I0917 17:17:39.667985   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:17:39.668050   29734 ssh_runner.go:195] Run: crio config
	I0917 17:17:39.720134   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:39.720157   29734 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 17:17:39.720169   29734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:17:39.720198   29734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181247 NodeName:ha-181247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:17:39.720379   29734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181247"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:17:39.720405   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:17:39.720457   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:17:39.737470   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:17:39.737600   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0917 17:17:39.737658   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:17:39.748502   29734 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:17:39.748589   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 17:17:39.758725   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0917 17:17:39.776655   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:17:39.794865   29734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0917 17:17:39.812575   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0917 17:17:39.829867   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:17:39.833919   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:17:39.847376   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:17:39.967101   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:17:39.985136   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.195
	I0917 17:17:39.985164   29734 certs.go:194] generating shared ca certs ...
	I0917 17:17:39.985186   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:39.985372   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:17:39.985442   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:17:39.985456   29734 certs.go:256] generating profile certs ...
	I0917 17:17:39.985529   29734 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:17:39.985547   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt with IP's: []
	I0917 17:17:40.064829   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt ...
	I0917 17:17:40.064859   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt: {Name:mk3079f6d5b8989ce7b1764d3b37598392b2af32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.065023   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key ...
	I0917 17:17:40.065034   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key: {Name:mk5b49f925f80708dafeed2ecaef8facba26de2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.065108   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9
	I0917 17:17:40.065126   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.254]
	I0917 17:17:40.144821   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 ...
	I0917 17:17:40.144846   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9: {Name:mk2ea83f7ca9c6e83670f0043b0246ce3797e00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.145023   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9 ...
	I0917 17:17:40.145040   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9: {Name:mk21c25b17297bed11f0801fc03553121c429b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.145138   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:17:40.145266   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:17:40.145343   29734 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:17:40.145359   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt with IP's: []
	I0917 17:17:40.498271   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt ...
	I0917 17:17:40.498324   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt: {Name:mk54f89ba6af98139c51d15c40e430bfe59aa203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.498507   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key ...
	I0917 17:17:40.498521   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key: {Name:mk94b16c6c206654a670864f84b420720096ef6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.498625   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:17:40.498644   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:17:40.498655   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:17:40.498665   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:17:40.498681   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:17:40.498691   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:17:40.498702   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:17:40.498712   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:17:40.498758   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:17:40.498792   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:17:40.498801   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:17:40.498820   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:17:40.498843   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:17:40.498869   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:17:40.498906   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:17:40.498931   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.498944   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.498957   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.499567   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:17:40.528588   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:17:40.554021   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:17:40.579122   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:17:40.604527   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 17:17:40.630072   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:17:40.655533   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:17:40.680429   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:17:40.706898   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:17:40.731672   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:17:40.759816   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:17:40.809801   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:17:40.828357   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:17:40.835303   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:17:40.847722   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.852545   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.852597   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.859002   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:17:40.871260   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:17:40.883696   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.888589   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.888661   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.894719   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:17:40.906835   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:17:40.918826   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.923634   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.923689   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.929587   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
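	The three blocks above stage each extra CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (the <hash>.0 names OpenSSL uses to look up trust anchors, e.g. b5213941.0 for minikubeCA.pem). A minimal shell sketch of that convention, with an illustrative certificate path:
	    CERT=/usr/share/ca-certificates/example.pem      # hypothetical path; the log handles 18259.pem, 182592.pem and minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject-name hash, e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs through <hash>.0 symlinks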
	I0917 17:17:40.941672   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:17:40.946142   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:17:40.946194   29734 kubeadm.go:392] StartCluster: {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:17:40.946257   29734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:17:40.946297   29734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:17:40.987549   29734 cri.go:89] found id: ""
	I0917 17:17:40.987615   29734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 17:17:40.999318   29734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 17:17:41.010837   29734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 17:17:41.022189   29734 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 17:17:41.022219   29734 kubeadm.go:157] found existing configuration files:
	
	I0917 17:17:41.022270   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 17:17:41.032930   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 17:17:41.032999   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 17:17:41.043745   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 17:17:41.053997   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 17:17:41.054067   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 17:17:41.064880   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 17:17:41.074961   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 17:17:41.075026   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 17:17:41.085766   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 17:17:41.096174   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 17:17:41.096231   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 17:17:41.107267   29734 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 17:17:41.229035   29734 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 17:17:41.229176   29734 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 17:17:41.347012   29734 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 17:17:41.347117   29734 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 17:17:41.347206   29734 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 17:17:41.358370   29734 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 17:17:41.376928   29734 out.go:235]   - Generating certificates and keys ...
	I0917 17:17:41.377051   29734 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 17:17:41.377114   29734 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 17:17:41.558413   29734 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 17:17:41.652625   29734 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 17:17:42.116063   29734 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 17:17:42.340573   29734 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 17:17:42.606864   29734 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 17:17:42.607028   29734 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-181247 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0917 17:17:43.174935   29734 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 17:17:43.175172   29734 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-181247 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0917 17:17:43.325108   29734 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 17:17:43.457430   29734 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 17:17:43.610259   29734 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 17:17:43.610381   29734 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 17:17:43.869331   29734 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 17:17:43.969996   29734 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 17:17:44.104548   29734 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 17:17:44.304014   29734 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 17:17:44.554355   29734 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 17:17:44.554814   29734 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 17:17:44.559120   29734 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 17:17:44.561642   29734 out.go:235]   - Booting up control plane ...
	I0917 17:17:44.561760   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 17:17:44.561883   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 17:17:44.562216   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 17:17:44.579298   29734 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 17:17:44.588734   29734 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 17:17:44.588841   29734 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 17:17:44.733605   29734 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 17:17:44.733824   29734 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 17:17:45.734464   29734 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004156457s
	I0917 17:17:45.734566   29734 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 17:17:51.382928   29734 kubeadm.go:310] [api-check] The API server is healthy after 5.65227271s
	I0917 17:17:51.397060   29734 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 17:17:51.428650   29734 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 17:17:51.963920   29734 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 17:17:51.964105   29734 kubeadm.go:310] [mark-control-plane] Marking the node ha-181247 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 17:17:51.977148   29734 kubeadm.go:310] [bootstrap-token] Using token: jv4hj7.gvj0gihpcecyr3ei
	I0917 17:17:51.979014   29734 out.go:235]   - Configuring RBAC rules ...
	I0917 17:17:51.979165   29734 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 17:17:51.985435   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 17:17:51.995160   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 17:17:52.010830   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 17:17:52.015515   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 17:17:52.024226   29734 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 17:17:52.040459   29734 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 17:17:52.295460   29734 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 17:17:52.794454   29734 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 17:17:52.795475   29734 kubeadm.go:310] 
	I0917 17:17:52.795579   29734 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 17:17:52.795590   29734 kubeadm.go:310] 
	I0917 17:17:52.795701   29734 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 17:17:52.795712   29734 kubeadm.go:310] 
	I0917 17:17:52.795743   29734 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 17:17:52.795812   29734 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 17:17:52.795859   29734 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 17:17:52.795868   29734 kubeadm.go:310] 
	I0917 17:17:52.795912   29734 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 17:17:52.795919   29734 kubeadm.go:310] 
	I0917 17:17:52.795982   29734 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 17:17:52.795994   29734 kubeadm.go:310] 
	I0917 17:17:52.796046   29734 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 17:17:52.796111   29734 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 17:17:52.796168   29734 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 17:17:52.796175   29734 kubeadm.go:310] 
	I0917 17:17:52.796243   29734 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 17:17:52.796307   29734 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 17:17:52.796313   29734 kubeadm.go:310] 
	I0917 17:17:52.796431   29734 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv4hj7.gvj0gihpcecyr3ei \
	I0917 17:17:52.796570   29734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 17:17:52.796604   29734 kubeadm.go:310] 	--control-plane 
	I0917 17:17:52.796621   29734 kubeadm.go:310] 
	I0917 17:17:52.796741   29734 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 17:17:52.796750   29734 kubeadm.go:310] 
	I0917 17:17:52.796867   29734 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv4hj7.gvj0gihpcecyr3ei \
	I0917 17:17:52.796995   29734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 17:17:52.798126   29734 kubeadm.go:310] W0917 17:17:41.195410     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:17:52.798517   29734 kubeadm.go:310] W0917 17:17:41.196383     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:17:52.798621   29734 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
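	The two deprecation warnings above refer to the v1beta3 kubeadm config that minikube renders to /var/tmp/minikube/kubeadm.yaml; a hedged sketch of the migration the warning itself suggests (the output file name is illustrative, not part of the test run):
	    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml   # rewrites the same spec against the newer API version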
	I0917 17:17:52.798647   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:52.798655   29734 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 17:17:52.800787   29734 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 17:17:52.802389   29734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 17:17:52.808968   29734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 17:17:52.808985   29734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 17:17:52.834898   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 17:17:53.248046   29734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 17:17:53.248136   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:53.248163   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247 minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=true
	I0917 17:17:53.416678   29734 ops.go:34] apiserver oom_adj: -16
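	The oom_adj reading above (-16) means the API server is strongly protected from the kernel OOM killer. A sketch of repeating the same check by hand, with the modern equivalent knob shown for comparison (not something the test runs):
	    PID=$(pgrep kube-apiserver)     # same PID lookup the log's command uses
	    cat /proc/$PID/oom_adj          # legacy knob, range -17..15; -16 here
	    cat /proc/$PID/oom_score_adj    # current interface, range -1000..1000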
	I0917 17:17:53.416822   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:53.917270   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:54.417627   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:54.917758   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:55.417671   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:55.917001   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:56.416919   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:56.917728   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:57.048080   29734 kubeadm.go:1113] duration metric: took 3.80000476s to wait for elevateKubeSystemPrivileges
	I0917 17:17:57.048120   29734 kubeadm.go:394] duration metric: took 16.10192849s to StartCluster
	I0917 17:17:57.048141   29734 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:57.048226   29734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:57.049291   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:57.050004   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 17:17:57.050022   29734 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:57.050049   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:17:57.050068   29734 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 17:17:57.050151   29734 addons.go:69] Setting storage-provisioner=true in profile "ha-181247"
	I0917 17:17:57.050173   29734 addons.go:69] Setting default-storageclass=true in profile "ha-181247"
	I0917 17:17:57.050188   29734 addons.go:234] Setting addon storage-provisioner=true in "ha-181247"
	I0917 17:17:57.050222   29734 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-181247"
	I0917 17:17:57.050272   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:57.050226   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:17:57.050724   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.050764   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.050764   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.050804   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.066436   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0917 17:17:57.066489   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0917 17:17:57.066942   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.067003   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.067508   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.067528   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.067508   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.067556   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.067899   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.067936   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.068101   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.068544   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.068590   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.070182   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:57.070452   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 17:17:57.070857   29734 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 17:17:57.071098   29734 addons.go:234] Setting addon default-storageclass=true in "ha-181247"
	I0917 17:17:57.071132   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:17:57.071433   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.071467   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.084769   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0917 17:17:57.085293   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.085895   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.085919   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.086266   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.086274   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I0917 17:17:57.086481   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.086812   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.087296   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.087318   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.087643   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.088180   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.088219   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.088491   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:57.091028   29734 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:17:57.092264   29734 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:17:57.092280   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 17:17:57.092295   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:57.095831   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.096408   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:57.096434   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.096761   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:57.096968   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:57.097113   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:57.097247   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:57.104921   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0917 17:17:57.105405   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.105853   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.105870   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.106256   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.106466   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.108299   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:57.108525   29734 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 17:17:57.108541   29734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 17:17:57.108554   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:57.111624   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.112024   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:57.112053   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.112259   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:57.112403   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:57.112537   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:57.112639   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:57.160956   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 17:17:57.227238   29734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:17:57.270544   29734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:17:57.553613   29734 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
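	The sed pipeline above rewrites the coredns ConfigMap in place; judging from that expression, the injected Corefile fragment should look roughly like the block below (a sketch, not captured from the cluster), and can be checked with kubectl:
	    #   hosts {
	    #      192.168.39.1 host.minikube.internal
	    #      fallthrough
	    #   }
	    # (inserted before the 'forward . /etc/resolv.conf' line, plus a 'log' directive before 'errors')
	    kubectl --context ha-181247 -n kube-system get configmap coredns -o yaml   # hypothetical verification step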
	I0917 17:17:57.735871   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.735899   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.735935   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.735954   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736205   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736223   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736232   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.736239   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736245   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736262   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736272   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.736281   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736199   29734 main.go:141] libmachine: (ha-181247) DBG | Closing plugin on server side
	I0917 17:17:57.736423   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736433   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736493   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736503   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736578   29734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 17:17:57.736596   29734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 17:17:57.736718   29734 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0917 17:17:57.736731   29734 round_trippers.go:469] Request Headers:
	I0917 17:17:57.736742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:17:57.736747   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:17:57.750672   29734 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0917 17:17:57.751432   29734 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0917 17:17:57.751452   29734 round_trippers.go:469] Request Headers:
	I0917 17:17:57.751463   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:17:57.751471   29734 round_trippers.go:473]     Content-Type: application/json
	I0917 17:17:57.751478   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:17:57.753965   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:17:57.754145   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.754162   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.754510   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.754567   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.754583   29734 main.go:141] libmachine: (ha-181247) DBG | Closing plugin on server side
	I0917 17:17:57.756623   29734 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0917 17:17:57.758034   29734 addons.go:510] duration metric: took 707.968528ms for enable addons: enabled=[storage-provisioner default-storageclass]
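	Both addons can also be inspected or toggled later from the minikube CLI; a hedged example against this profile (not part of the test run):
	    minikube -p ha-181247 addons list                    # should show storage-provisioner and default-storageclass enabled
	    minikube -p ha-181247 addons enable metrics-server   # example of enabling one of the addons listed as false above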
	I0917 17:17:57.758078   29734 start.go:246] waiting for cluster config update ...
	I0917 17:17:57.758090   29734 start.go:255] writing updated cluster config ...
	I0917 17:17:57.759731   29734 out.go:201] 
	I0917 17:17:57.761159   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:57.761306   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:57.762945   29734 out.go:177] * Starting "ha-181247-m02" control-plane node in "ha-181247" cluster
	I0917 17:17:57.764198   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:57.764230   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:17:57.764349   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:17:57.764361   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:17:57.764433   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:57.764626   29734 start.go:360] acquireMachinesLock for ha-181247-m02: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:17:57.764673   29734 start.go:364] duration metric: took 27.836µs to acquireMachinesLock for "ha-181247-m02"
	I0917 17:17:57.764693   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:57.764769   29734 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0917 17:17:57.766494   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:17:57.766576   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.766611   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.781200   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0917 17:17:57.781679   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.782109   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.782128   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.782423   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.782604   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:17:57.782725   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:17:57.782844   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:17:57.782876   29734 client.go:168] LocalClient.Create starting
	I0917 17:17:57.782909   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:17:57.782947   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:57.782970   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:57.783034   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:17:57.783071   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:57.783090   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:57.783124   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:17:57.783133   29734 main.go:141] libmachine: (ha-181247-m02) Calling .PreCreateCheck
	I0917 17:17:57.783246   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:17:57.783600   29734 main.go:141] libmachine: Creating machine...
	I0917 17:17:57.783614   29734 main.go:141] libmachine: (ha-181247-m02) Calling .Create
	I0917 17:17:57.783705   29734 main.go:141] libmachine: (ha-181247-m02) Creating KVM machine...
	I0917 17:17:57.784871   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found existing default KVM network
	I0917 17:17:57.784944   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found existing private KVM network mk-ha-181247
	I0917 17:17:57.785123   29734 main.go:141] libmachine: (ha-181247-m02) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 ...
	I0917 17:17:57.785140   29734 main.go:141] libmachine: (ha-181247-m02) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:17:57.785285   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:57.785116   30104 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:57.785359   29734 main.go:141] libmachine: (ha-181247-m02) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:17:58.016182   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.016045   30104 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa...
	I0917 17:17:58.178317   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.178194   30104 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/ha-181247-m02.rawdisk...
	I0917 17:17:58.178361   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Writing magic tar header
	I0917 17:17:58.178377   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Writing SSH key tar header
	I0917 17:17:58.178389   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.178302   30104 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 ...
	I0917 17:17:58.178429   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02
	I0917 17:17:58.178453   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:17:58.178472   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 (perms=drwx------)
	I0917 17:17:58.178482   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:58.178492   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:17:58.178498   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:17:58.178506   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:17:58.178515   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:17:58.178521   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:17:58.178529   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:17:58.178534   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:17:58.178572   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:17:58.178589   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home
	I0917 17:17:58.178595   29734 main.go:141] libmachine: (ha-181247-m02) Creating domain...
	I0917 17:17:58.178605   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Skipping /home - not owner
	I0917 17:17:58.179564   29734 main.go:141] libmachine: (ha-181247-m02) define libvirt domain using xml: 
	I0917 17:17:58.179579   29734 main.go:141] libmachine: (ha-181247-m02) <domain type='kvm'>
	I0917 17:17:58.179586   29734 main.go:141] libmachine: (ha-181247-m02)   <name>ha-181247-m02</name>
	I0917 17:17:58.179590   29734 main.go:141] libmachine: (ha-181247-m02)   <memory unit='MiB'>2200</memory>
	I0917 17:17:58.179595   29734 main.go:141] libmachine: (ha-181247-m02)   <vcpu>2</vcpu>
	I0917 17:17:58.179599   29734 main.go:141] libmachine: (ha-181247-m02)   <features>
	I0917 17:17:58.179606   29734 main.go:141] libmachine: (ha-181247-m02)     <acpi/>
	I0917 17:17:58.179612   29734 main.go:141] libmachine: (ha-181247-m02)     <apic/>
	I0917 17:17:58.179618   29734 main.go:141] libmachine: (ha-181247-m02)     <pae/>
	I0917 17:17:58.179634   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.179644   29734 main.go:141] libmachine: (ha-181247-m02)   </features>
	I0917 17:17:58.179653   29734 main.go:141] libmachine: (ha-181247-m02)   <cpu mode='host-passthrough'>
	I0917 17:17:58.179658   29734 main.go:141] libmachine: (ha-181247-m02)   
	I0917 17:17:58.179664   29734 main.go:141] libmachine: (ha-181247-m02)   </cpu>
	I0917 17:17:58.179668   29734 main.go:141] libmachine: (ha-181247-m02)   <os>
	I0917 17:17:58.179673   29734 main.go:141] libmachine: (ha-181247-m02)     <type>hvm</type>
	I0917 17:17:58.179700   29734 main.go:141] libmachine: (ha-181247-m02)     <boot dev='cdrom'/>
	I0917 17:17:58.179723   29734 main.go:141] libmachine: (ha-181247-m02)     <boot dev='hd'/>
	I0917 17:17:58.179734   29734 main.go:141] libmachine: (ha-181247-m02)     <bootmenu enable='no'/>
	I0917 17:17:58.179743   29734 main.go:141] libmachine: (ha-181247-m02)   </os>
	I0917 17:17:58.179752   29734 main.go:141] libmachine: (ha-181247-m02)   <devices>
	I0917 17:17:58.179761   29734 main.go:141] libmachine: (ha-181247-m02)     <disk type='file' device='cdrom'>
	I0917 17:17:58.179783   29734 main.go:141] libmachine: (ha-181247-m02)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/boot2docker.iso'/>
	I0917 17:17:58.179794   29734 main.go:141] libmachine: (ha-181247-m02)       <target dev='hdc' bus='scsi'/>
	I0917 17:17:58.179809   29734 main.go:141] libmachine: (ha-181247-m02)       <readonly/>
	I0917 17:17:58.179828   29734 main.go:141] libmachine: (ha-181247-m02)     </disk>
	I0917 17:17:58.179844   29734 main.go:141] libmachine: (ha-181247-m02)     <disk type='file' device='disk'>
	I0917 17:17:58.179861   29734 main.go:141] libmachine: (ha-181247-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:17:58.179877   29734 main.go:141] libmachine: (ha-181247-m02)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/ha-181247-m02.rawdisk'/>
	I0917 17:17:58.179887   29734 main.go:141] libmachine: (ha-181247-m02)       <target dev='hda' bus='virtio'/>
	I0917 17:17:58.179893   29734 main.go:141] libmachine: (ha-181247-m02)     </disk>
	I0917 17:17:58.179898   29734 main.go:141] libmachine: (ha-181247-m02)     <interface type='network'>
	I0917 17:17:58.179903   29734 main.go:141] libmachine: (ha-181247-m02)       <source network='mk-ha-181247'/>
	I0917 17:17:58.179910   29734 main.go:141] libmachine: (ha-181247-m02)       <model type='virtio'/>
	I0917 17:17:58.179915   29734 main.go:141] libmachine: (ha-181247-m02)     </interface>
	I0917 17:17:58.179922   29734 main.go:141] libmachine: (ha-181247-m02)     <interface type='network'>
	I0917 17:17:58.179936   29734 main.go:141] libmachine: (ha-181247-m02)       <source network='default'/>
	I0917 17:17:58.179952   29734 main.go:141] libmachine: (ha-181247-m02)       <model type='virtio'/>
	I0917 17:17:58.179963   29734 main.go:141] libmachine: (ha-181247-m02)     </interface>
	I0917 17:17:58.179973   29734 main.go:141] libmachine: (ha-181247-m02)     <serial type='pty'>
	I0917 17:17:58.179982   29734 main.go:141] libmachine: (ha-181247-m02)       <target port='0'/>
	I0917 17:17:58.179991   29734 main.go:141] libmachine: (ha-181247-m02)     </serial>
	I0917 17:17:58.179999   29734 main.go:141] libmachine: (ha-181247-m02)     <console type='pty'>
	I0917 17:17:58.180009   29734 main.go:141] libmachine: (ha-181247-m02)       <target type='serial' port='0'/>
	I0917 17:17:58.180022   29734 main.go:141] libmachine: (ha-181247-m02)     </console>
	I0917 17:17:58.180034   29734 main.go:141] libmachine: (ha-181247-m02)     <rng model='virtio'>
	I0917 17:17:58.180045   29734 main.go:141] libmachine: (ha-181247-m02)       <backend model='random'>/dev/random</backend>
	I0917 17:17:58.180054   29734 main.go:141] libmachine: (ha-181247-m02)     </rng>
	I0917 17:17:58.180061   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.180068   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.180073   29734 main.go:141] libmachine: (ha-181247-m02)   </devices>
	I0917 17:17:58.180077   29734 main.go:141] libmachine: (ha-181247-m02) </domain>
	I0917 17:17:58.180094   29734 main.go:141] libmachine: (ha-181247-m02) 
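The lines above are the libvirt domain XML the kvm2 driver defines for the second control-plane node: host-passthrough CPU, the boot2docker ISO attached as a boot CD-ROM, the raw disk, and two virtio NICs (the private mk-ha-181247 network plus the default NAT network). As a rough illustration of the define-and-start flow only (not minikube's actual driver code, which talks to libvirt directly), the same steps can be driven with virsh from Go; the XML path and domain name below are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart is a minimal sketch of the libvirt workflow shown in the log:
// define a domain from an XML description, then start it. Paths and the domain
// name are illustrative placeholders, and virsh must be installed.
func defineAndStart(xmlPath, domainName string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("ha-181247-m02.xml", "ha-181247-m02"); err != nil {
		fmt.Println(err)
	}
}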
	I0917 17:17:58.187935   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:b7:3c:7a in network default
	I0917 17:17:58.188506   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring networks are active...
	I0917 17:17:58.188531   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:58.189188   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring network default is active
	I0917 17:17:58.189474   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring network mk-ha-181247 is active
	I0917 17:17:58.189796   29734 main.go:141] libmachine: (ha-181247-m02) Getting domain xml...
	I0917 17:17:58.190559   29734 main.go:141] libmachine: (ha-181247-m02) Creating domain...
	I0917 17:17:59.445602   29734 main.go:141] libmachine: (ha-181247-m02) Waiting to get IP...
	I0917 17:17:59.446507   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.446930   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.446966   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.446914   30104 retry.go:31] will retry after 263.3297ms: waiting for machine to come up
	I0917 17:17:59.712214   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.712719   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.712744   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.712669   30104 retry.go:31] will retry after 236.146897ms: waiting for machine to come up
	I0917 17:17:59.950043   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.950493   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.950513   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.950450   30104 retry.go:31] will retry after 440.967944ms: waiting for machine to come up
	I0917 17:18:00.393105   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:00.393638   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:00.393664   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:00.393579   30104 retry.go:31] will retry after 520.557465ms: waiting for machine to come up
	I0917 17:18:00.915263   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:00.915684   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:00.915712   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:00.915620   30104 retry.go:31] will retry after 655.302859ms: waiting for machine to come up
	I0917 17:18:01.572071   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:01.572499   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:01.572527   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:01.572471   30104 retry.go:31] will retry after 849.8849ms: waiting for machine to come up
	I0917 17:18:02.423434   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:02.423972   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:02.423997   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:02.423904   30104 retry.go:31] will retry after 978.609236ms: waiting for machine to come up
	I0917 17:18:03.404323   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:03.404859   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:03.404888   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:03.404806   30104 retry.go:31] will retry after 1.1479538s: waiting for machine to come up
	I0917 17:18:04.554114   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:04.554487   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:04.554512   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:04.554472   30104 retry.go:31] will retry after 1.832387096s: waiting for machine to come up
	I0917 17:18:06.389580   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:06.390011   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:06.390035   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:06.389973   30104 retry.go:31] will retry after 1.907985426s: waiting for machine to come up
	I0917 17:18:08.299652   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:08.300189   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:08.300211   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:08.300157   30104 retry.go:31] will retry after 1.842850915s: waiting for machine to come up
	I0917 17:18:10.145000   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:10.145487   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:10.145508   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:10.145448   30104 retry.go:31] will retry after 2.563514245s: waiting for machine to come up
	I0917 17:18:12.712222   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:12.712706   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:12.712737   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:12.712675   30104 retry.go:31] will retry after 3.925683535s: waiting for machine to come up
	I0917 17:18:16.642998   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:16.643406   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:16.643427   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:16.643365   30104 retry.go:31] will retry after 4.188157974s: waiting for machine to come up
	I0917 17:18:20.834295   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.834870   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.834901   29734 main.go:141] libmachine: (ha-181247-m02) Found IP for machine: 192.168.39.11
	I0917 17:18:20.834915   29734 main.go:141] libmachine: (ha-181247-m02) Reserving static IP address...
	I0917 17:18:20.835306   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find host DHCP lease matching {name: "ha-181247-m02", mac: "52:54:00:a4:df:96", ip: "192.168.39.11"} in network mk-ha-181247
	I0917 17:18:20.914631   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Getting to WaitForSSH function...
	I0917 17:18:20.914671   29734 main.go:141] libmachine: (ha-181247-m02) Reserved static IP address: 192.168.39.11
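The repeated "will retry after …: waiting for machine to come up" lines are minikube's generic retry helper polling libvirt's DHCP leases until the new domain's MAC (52:54:00:a4:df:96) shows up with an address on mk-ha-181247, with the delay between attempts growing each time. A stdlib-only sketch of that poll-with-growing-backoff pattern follows; lookup is a stand-in for the real lease query, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// stretching the sleep between attempts, mirroring the retry pattern in the log.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff gradually
	}
	return "", errors.New("machine did not get an IP before the deadline")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet") // simulate the lease not existing yet
		}
		return "192.168.39.11", nil
	}, time.Minute)
	fmt.Println(ip, err)
}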
	I0917 17:18:20.914694   29734 main.go:141] libmachine: (ha-181247-m02) Waiting for SSH to be available...
	I0917 17:18:20.917727   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.918105   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:20.918135   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.918256   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using SSH client type: external
	I0917 17:18:20.918287   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa (-rw-------)
	I0917 17:18:20.918332   29734 main.go:141] libmachine: (ha-181247-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:18:20.918353   29734 main.go:141] libmachine: (ha-181247-m02) DBG | About to run SSH command:
	I0917 17:18:20.918378   29734 main.go:141] libmachine: (ha-181247-m02) DBG | exit 0
	I0917 17:18:21.045627   29734 main.go:141] libmachine: (ha-181247-m02) DBG | SSH cmd err, output: <nil>: 
	I0917 17:18:21.045884   29734 main.go:141] libmachine: (ha-181247-m02) KVM machine creation complete!
	I0917 17:18:21.046333   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:18:21.046946   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:21.047221   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:21.047384   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:18:21.047413   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:18:21.048705   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:18:21.048717   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:18:21.048722   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:18:21.048728   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.050992   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.051417   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.051442   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.051589   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.051758   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.051883   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.051992   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.052143   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.052447   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.052463   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:18:21.160757   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
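Both the external ssh invocation earlier and the native client above perform the same readiness probe: connect as docker@192.168.39.11 with the generated machine key and run "exit 0" until it succeeds. A minimal sketch of that probe with golang.org/x/crypto/ssh, using the host, user, and key path from this run (the helper itself is illustrative, not libmachine's implementation):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH runs "exit 0" on the target host, the readiness check the log shows.
func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.11:22", "docker",
		"/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa")
	fmt.Println("ssh probe:", err)
}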
	I0917 17:18:21.160782   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:18:21.160790   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.163388   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.163703   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.163736   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.163882   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.164042   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.164222   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.164343   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.164539   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.164733   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.164746   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:18:21.274826   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:18:21.274913   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:18:21.274923   29734 main.go:141] libmachine: Provisioning with buildroot...
	I0917 17:18:21.274931   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.275195   29734 buildroot.go:166] provisioning hostname "ha-181247-m02"
	I0917 17:18:21.275211   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.275418   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.277879   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.278227   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.278256   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.278398   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.278590   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.278731   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.278882   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.279031   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.279198   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.279210   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247-m02 && echo "ha-181247-m02" | sudo tee /etc/hostname
	I0917 17:18:21.405388   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247-m02
	
	I0917 17:18:21.405424   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.408809   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.409168   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.409195   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.409399   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.409584   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.409728   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.409851   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.409983   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.410157   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.410172   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:18:21.526487   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:18:21.526513   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:18:21.526527   29734 buildroot.go:174] setting up certificates
	I0917 17:18:21.526536   29734 provision.go:84] configureAuth start
	I0917 17:18:21.526545   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.526843   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:21.529384   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.529812   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.529836   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.529971   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.532743   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.533108   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.533134   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.533269   29734 provision.go:143] copyHostCerts
	I0917 17:18:21.533310   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:18:21.533352   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:18:21.533361   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:18:21.533428   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:18:21.533765   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:18:21.533807   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:18:21.533815   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:18:21.533864   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:18:21.534035   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:18:21.534074   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:18:21.534084   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:18:21.534143   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:18:21.534238   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247-m02 san=[127.0.0.1 192.168.39.11 ha-181247-m02 localhost minikube]
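configureAuth issues a host-specific server certificate signed by the shared minikube CA, with the node's IP and hostnames from the san=[…] list above. A compact crypto/x509 sketch of that kind of SAN-bearing issuance follows; it is illustrative only, so a throwaway self-signed CA stands in for the persistent CA under ~/.minikube/certs, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real provisioner reuses ~/.minikube/certs/ca.pem and ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-181247-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-181247-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}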
	I0917 17:18:21.602336   29734 provision.go:177] copyRemoteCerts
	I0917 17:18:21.602400   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:18:21.602427   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.605998   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.606365   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.606406   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.606636   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.606839   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.607021   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.607134   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:21.692171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:18:21.692260   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:18:21.718050   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:18:21.718125   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 17:18:21.742902   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:18:21.742985   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:18:21.768165   29734 provision.go:87] duration metric: took 241.617875ms to configureAuth
	I0917 17:18:21.768198   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:18:21.768391   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:21.768463   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.771121   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.771489   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.771517   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.771752   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.771919   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.772101   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.772248   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.772392   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.772545   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.772559   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:18:22.006520   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:18:22.006545   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:18:22.006553   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetURL
	I0917 17:18:22.007874   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using libvirt version 6000000
	I0917 17:18:22.010313   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.010655   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.010682   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.010906   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:18:22.010921   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:18:22.010929   29734 client.go:171] duration metric: took 24.228046586s to LocalClient.Create
	I0917 17:18:22.010955   29734 start.go:167] duration metric: took 24.228112951s to libmachine.API.Create "ha-181247"
	I0917 17:18:22.010966   29734 start.go:293] postStartSetup for "ha-181247-m02" (driver="kvm2")
	I0917 17:18:22.010980   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:18:22.011005   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.011239   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:18:22.011261   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.013775   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.014065   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.014092   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.014234   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.014441   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.014609   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.014774   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.102790   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:18:22.107538   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:18:22.107563   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:18:22.107640   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:18:22.107710   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:18:22.107719   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:18:22.107799   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:18:22.120049   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:18:22.148561   29734 start.go:296] duration metric: took 137.580177ms for postStartSetup
	I0917 17:18:22.148607   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:18:22.149220   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:22.152005   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.152362   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.152384   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.152666   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:18:22.152912   29734 start.go:128] duration metric: took 24.388130663s to createHost
	I0917 17:18:22.152940   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.155168   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.155507   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.155533   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.155714   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.155897   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.156033   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.156180   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.156294   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:22.156469   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:22.156480   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:18:22.266321   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593502.221764932
	
	I0917 17:18:22.266340   29734 fix.go:216] guest clock: 1726593502.221764932
	I0917 17:18:22.266347   29734 fix.go:229] Guest: 2024-09-17 17:18:22.221764932 +0000 UTC Remote: 2024-09-17 17:18:22.152926043 +0000 UTC m=+69.893805041 (delta=68.838889ms)
	I0917 17:18:22.266364   29734 fix.go:200] guest clock delta is within tolerance: 68.838889ms
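After provisioning, minikube reads the guest clock with `date +%s.%N` over SSH and compares it to the host-side timestamp, accepting the skew if it stays within a tolerance (here 68.8ms is accepted). A small sketch of that comparison; the 2s tolerance is an assumed example value, not the one minikube uses.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" string produced by `date +%s.%N`
// on the guest and returns how far it is from the given local reference time.
func clockDelta(guest string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return guestTime.Sub(local), nil
}

func main() {
	// Values taken from the log above; the tolerance is an assumed example.
	local := time.Date(2024, 9, 17, 17, 18, 22, 152926043, time.UTC)
	delta, _ := clockDelta("1726593502.221764932", local)
	const tolerance = 2 * time.Second
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
	// Prints a delta of roughly 68.8ms, matching the value reported in the log.
}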
	I0917 17:18:22.266368   29734 start.go:83] releasing machines lock for "ha-181247-m02", held for 24.501686632s
	I0917 17:18:22.266384   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.266622   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:22.269609   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.270023   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.270058   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.272589   29734 out.go:177] * Found network options:
	I0917 17:18:22.274235   29734 out.go:177]   - NO_PROXY=192.168.39.195
	W0917 17:18:22.275808   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:18:22.275838   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276486   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276802   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276915   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:18:22.276954   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	W0917 17:18:22.276986   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:18:22.277042   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:18:22.277059   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.280134   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280462   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.280488   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280508   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280645   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.280794   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.280930   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.280955   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.280994   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.281087   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.281162   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.281327   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.281575   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.281701   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.529244   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:18:22.535467   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:18:22.535526   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:18:22.552017   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:18:22.552045   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:18:22.552109   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:18:22.569131   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:18:22.585085   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:18:22.585132   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:18:22.600389   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:18:22.615637   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:18:22.732209   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:18:22.872622   29734 docker.go:233] disabling docker service ...
	I0917 17:18:22.872701   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:18:22.888542   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:18:22.903914   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:18:23.051397   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:18:23.181885   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:18:23.200187   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:18:23.222525   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:18:23.222579   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.235585   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:18:23.235658   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.248584   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.261726   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.274049   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:18:23.287882   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.300763   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.320232   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.332578   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:18:23.345466   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:18:23.345532   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:18:23.362760   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:18:23.374070   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:23.505526   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:18:23.604959   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:18:23.605036   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:18:23.610222   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:18:23.610291   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:18:23.614410   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:18:23.658480   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:18:23.658573   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:18:23.688813   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:18:23.722694   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:18:23.724752   29734 out.go:177]   - env NO_PROXY=192.168.39.195
	I0917 17:18:23.726126   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:23.728958   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:23.729375   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:23.729394   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:23.729654   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:18:23.734247   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:18:23.748348   29734 mustload.go:65] Loading cluster: ha-181247
	I0917 17:18:23.748548   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:23.748862   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:23.748904   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:23.763864   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0917 17:18:23.764339   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:23.764903   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:23.764923   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:23.765213   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:23.765412   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:18:23.767286   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:18:23.767583   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:23.767627   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:23.783128   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0917 17:18:23.783610   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:23.784033   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:23.784050   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:23.784454   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:23.784638   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:18:23.784792   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.11
	I0917 17:18:23.784802   29734 certs.go:194] generating shared ca certs ...
	I0917 17:18:23.784820   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:23.784957   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:18:23.785010   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:18:23.785024   29734 certs.go:256] generating profile certs ...
	I0917 17:18:23.785109   29734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:18:23.785142   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d
	I0917 17:18:23.785163   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.254]
	I0917 17:18:24.017669   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d ...
	I0917 17:18:24.017698   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d: {Name:mk6fcd886260f431a2e141d60740f6e275c19e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:24.017871   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d ...
	I0917 17:18:24.017883   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d: {Name:mk928b4dd45f83731946f9df6abb001fae0c8aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:24.017955   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:18:24.018083   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:18:24.018227   29734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:18:24.018250   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:18:24.018262   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:18:24.018273   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:18:24.018286   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:18:24.018296   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:18:24.018306   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:18:24.018317   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:18:24.018326   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:18:24.018375   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:18:24.018404   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:18:24.018414   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:18:24.018434   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:18:24.018455   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:18:24.018474   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:18:24.018554   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:18:24.018581   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.018594   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.018608   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.018638   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:18:24.021899   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:24.022334   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:18:24.022366   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:24.022542   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:18:24.022737   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:18:24.022910   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:18:24.023032   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:18:24.097708   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 17:18:24.103770   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 17:18:24.115930   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 17:18:24.120584   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 17:18:24.134328   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 17:18:24.139095   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 17:18:24.151258   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 17:18:24.155654   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 17:18:24.165649   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 17:18:24.169836   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 17:18:24.180495   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 17:18:24.185119   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 17:18:24.196446   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:18:24.222479   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:18:24.247258   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:18:24.271758   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:18:24.296512   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 17:18:24.321219   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:18:24.346235   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:18:24.370848   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:18:24.396302   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:18:24.422386   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:18:24.449417   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:18:24.476090   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 17:18:24.495010   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 17:18:24.513069   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 17:18:24.533094   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 17:18:24.553421   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 17:18:24.573290   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 17:18:24.593016   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 17:18:24.611184   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:18:24.617107   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:18:24.629819   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.634464   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.634518   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.640567   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:18:24.652501   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:18:24.664675   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.669605   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.669656   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.675690   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:18:24.687747   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:18:24.701272   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.706122   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.706198   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.712224   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
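
Note: the paired "openssl x509 -hash" and "ln -fs ... /etc/ssl/certs/<hash>.0" runs above are how each CA is installed into the node's system trust store: the certificate is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of that pattern, shelling out to openssl the same way; the paths are illustrative and this is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a CA certificate and exposes it
// in certsDir as <hash>.0, which is how OpenSSL-based clients look up trusted CAs.
func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // os.Symlink fails if the link name already exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
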
	I0917 17:18:24.724111   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:18:24.728943   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:18:24.728994   29734 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I0917 17:18:24.729082   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:18:24.729132   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:18:24.729169   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:18:24.749085   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:18:24.749220   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 17:18:24.749310   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:18:24.760483   29734 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0917 17:18:24.760564   29734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0917 17:18:24.771164   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0917 17:18:24.771197   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:18:24.771262   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:18:24.771260   29734 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0917 17:18:24.771263   29734 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0917 17:18:24.775912   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0917 17:18:24.775951   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0917 17:18:25.394171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:18:25.394246   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:18:25.400533   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0917 17:18:25.400572   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0917 17:18:25.526918   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:18:25.558888   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:18:25.559005   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:18:25.580514   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0917 17:18:25.580552   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
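
Note: the kubectl, kubeadm and kubelet binaries above are fetched from dl.k8s.io with a "checksum=file:...sha256" suffix, i.e. each download is checked against the published .sha256 digest before being scp'd into /var/lib/minikube/binaries on the node. A hedged sketch of that verification step (file names are placeholders, not minikube's code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify compares a downloaded binary against the hex digest in its companion
// .sha256 file, mirroring the checksum=file: URLs shown in the log above.
func verify(binPath, sumPath string) error {
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}

	sum, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	want := fields[0]

	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := verify("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}
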
	I0917 17:18:26.020645   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 17:18:26.031121   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 17:18:26.049185   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:18:26.066971   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:18:26.084941   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:18:26.089581   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:18:26.104518   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:26.249535   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:18:26.267954   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:18:26.268305   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:26.268352   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:26.284171   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0917 17:18:26.284750   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:26.285379   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:26.285410   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:26.285784   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:26.285973   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:18:26.286132   29734 start.go:317] joinCluster: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:18:26.286260   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 17:18:26.286284   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:18:26.289193   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:26.289780   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:18:26.289806   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:26.290017   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:18:26.290229   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:18:26.290408   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:18:26.290560   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:18:26.444849   29734 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:18:26.444896   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2a07d3.ux3juwlz64et24sq --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I0917 17:18:49.054070   29734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2a07d3.ux3juwlz64et24sq --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (22.609145893s)
	I0917 17:18:49.054109   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 17:18:49.583985   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247-m02 minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=false
	I0917 17:18:49.708990   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-181247-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 17:18:49.837362   29734 start.go:319] duration metric: took 23.551222749s to joinCluster
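
Note: joining m02 as a second control plane is the two-step exchange visible above: the existing control plane is asked for a fresh join command ("kubeadm token create --print-join-command --ttl=0"), and that command is then run on the new machine with --control-plane and the node-specific flags appended. A rough Go sketch of the first step, assuming kubeadm is on PATH of the host it runs on; this is only an illustration of the flow, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// printJoinCommand returns a fresh "kubeadm join ..." line from the primary
// control plane. The extra flags seen in the log (--control-plane,
// --apiserver-advertise-address, --node-name, --cri-socket) are appended by the caller.
func printJoinCommand() (string, error) {
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	join, err := printJoinCommand()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(join + " --control-plane")
}
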
	I0917 17:18:49.837441   29734 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:18:49.837720   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:49.838795   29734 out.go:177] * Verifying Kubernetes components...
	I0917 17:18:49.839889   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:50.094124   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:18:50.126727   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:18:50.127076   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 17:18:50.127174   29734 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0917 17:18:50.127450   29734 node_ready.go:35] waiting up to 6m0s for node "ha-181247-m02" to be "Ready" ...
	I0917 17:18:50.127549   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:50.127557   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:50.127564   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:50.127572   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:50.140407   29734 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0917 17:18:50.628424   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:50.628447   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:50.628457   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:50.628463   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:50.634586   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:18:51.128653   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:51.128683   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:51.128695   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:51.128701   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:51.148397   29734 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0917 17:18:51.628341   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:51.628385   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:51.628394   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:51.628398   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:51.662216   29734 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0917 17:18:52.128469   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:52.128496   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:52.128507   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:52.128514   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:52.133662   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:18:52.134368   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:52.628563   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:52.628586   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:52.628597   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:52.628602   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:52.708335   29734 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0917 17:18:53.128636   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:53.128663   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:53.128672   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:53.128677   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:53.132233   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:53.627931   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:53.627954   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:53.627962   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:53.627970   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:53.631427   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.127631   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:54.127652   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:54.127660   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:54.127664   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:54.131464   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.628648   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:54.628679   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:54.628690   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:54.628694   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:54.632607   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.633308   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:55.127675   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:55.127696   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:55.127706   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:55.127710   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:55.132315   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:18:55.628076   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:55.628099   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:55.628107   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:55.628113   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:55.631189   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:56.128064   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:56.128094   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:56.128105   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:56.128111   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:56.134365   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:18:56.628672   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:56.628695   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:56.628704   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:56.628709   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:56.631642   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:18:57.128083   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:57.128106   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:57.128115   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:57.128119   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:57.131959   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:57.132534   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:57.628498   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:57.628520   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:57.628528   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:57.628532   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:57.632525   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:58.128220   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:58.128248   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:58.128259   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:58.128264   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:58.131830   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:58.627856   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:58.627881   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:58.627892   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:58.627896   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:58.631192   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:59.128329   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:59.128354   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:59.128362   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:59.128366   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:59.132197   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:59.132757   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:59.628125   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:59.628149   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:59.628160   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:59.628167   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:59.631656   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:00.128448   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:00.128476   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:00.128484   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:00.128489   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:00.132326   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:00.628352   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:00.628380   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:00.628388   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:00.628392   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:00.631953   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:01.127782   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:01.127807   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:01.127817   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:01.127823   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:01.131862   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:01.627971   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:01.627997   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:01.628005   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:01.628009   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:01.631442   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:01.632220   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:19:02.128452   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:02.128476   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:02.128491   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:02.128495   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:02.132367   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:02.628547   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:02.628569   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:02.628577   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:02.628581   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:02.632883   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:03.128467   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:03.128491   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:03.128499   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:03.128504   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:03.131890   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:03.627748   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:03.627771   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:03.627778   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:03.627783   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:03.631871   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:03.632342   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:19:04.128630   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.128656   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.128665   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.128668   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.133180   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.627677   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.627702   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.627713   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.627719   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.631015   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.631502   29734 node_ready.go:49] node "ha-181247-m02" has status "Ready":"True"
	I0917 17:19:04.631526   29734 node_ready.go:38] duration metric: took 14.504055199s for node "ha-181247-m02" to be "Ready" ...
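
Note: the long run of GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02 requests above is the node_ready wait polling the Node object roughly every 500ms until its Ready condition reports True (about 14.5s here). A minimal client-go sketch of the same wait, assuming the kubeconfig path from the log; the timeout and cadence are illustrative:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19662-11085/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-181247-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence visible in the log
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to be Ready")
	os.Exit(1)
}
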
	I0917 17:19:04.631534   29734 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:19:04.631615   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:04.631624   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.631631   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.631636   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.636011   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.643282   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.643389   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5lmg4
	I0917 17:19:04.643398   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.643409   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.643419   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.647868   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.648871   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.648884   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.648893   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.648898   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.652339   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.652851   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.652869   29734 pod_ready.go:82] duration metric: took 9.552348ms for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.652878   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.652932   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bdthh
	I0917 17:19:04.652940   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.652947   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.652950   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.657348   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.658736   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.658755   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.658761   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.658764   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.662294   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.663051   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.663067   29734 pod_ready.go:82] duration metric: took 10.183659ms for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.663076   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.663126   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247
	I0917 17:19:04.663134   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.663140   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.663144   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.666354   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.667375   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.667390   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.667398   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.667401   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.672811   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:19:04.673198   29734 pod_ready.go:93] pod "etcd-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.673215   29734 pod_ready.go:82] duration metric: took 10.133505ms for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.673224   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.673291   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m02
	I0917 17:19:04.673300   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.673306   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.673309   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.676064   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:19:04.676574   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.676588   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.676595   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.676599   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.679297   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:19:04.680195   29734 pod_ready.go:93] pod "etcd-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.680211   29734 pod_ready.go:82] duration metric: took 6.968087ms for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.680224   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.828649   29734 request.go:632] Waited for 148.367571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:19:04.828725   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:19:04.828731   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.828738   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.828741   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.833066   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:05.028113   29734 request.go:632] Waited for 194.320349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.028199   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.028209   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.028219   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.028229   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.031956   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.032561   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.032584   29734 pod_ready.go:82] duration metric: took 352.352224ms for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
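
Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines come from the client-go rate limiter, not the API server: the dumped rest.Config above shows QPS:0 and Burst:0, i.e. the defaults of roughly 5 requests/sec with a burst of 10, so the back-to-back pod and node GETs queue up. A sketch of how a client could raise those limits (values are illustrative; the test client here keeps the defaults):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19662-11085/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults to ~5 requests/sec with a burst of 10 when QPS and Burst
	// are left at zero; raising them avoids the client-side throttling waits when
	// many sequential GETs are issued in a tight loop.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-system has %d pods\n", len(pods.Items))
}
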
	I0917 17:19:05.032596   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.228614   29734 request.go:632] Waited for 195.953875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:19:05.228698   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:19:05.228703   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.228712   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.228719   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.232270   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.428630   29734 request.go:632] Waited for 195.391292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:05.428712   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:05.428719   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.428726   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.428731   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.432195   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.432785   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.432806   29734 pod_ready.go:82] duration metric: took 400.203438ms for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.432816   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.627771   29734 request.go:632] Waited for 194.898858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:19:05.627856   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:19:05.627862   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.627869   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.627874   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.631519   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.827768   29734 request.go:632] Waited for 195.295968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.827821   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.827827   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.827835   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.827839   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.831185   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.831809   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.831830   29734 pod_ready.go:82] duration metric: took 399.00684ms for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.831840   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.027853   29734 request.go:632] Waited for 195.925024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:19:06.027921   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:19:06.027928   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.027937   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.027944   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.031819   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.227852   29734 request.go:632] Waited for 195.333615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:06.227914   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:06.227920   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.227928   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.227932   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.231334   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.231806   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:06.231822   29734 pod_ready.go:82] duration metric: took 399.976189ms for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.231832   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.428530   29734 request.go:632] Waited for 196.625704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:19:06.428599   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:19:06.428608   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.428619   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.428628   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.432403   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.628466   29734 request.go:632] Waited for 195.4225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:06.628538   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:06.628548   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.628555   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.628561   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.631799   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.632499   29734 pod_ready.go:93] pod "kube-proxy-7rrxk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:06.632521   29734 pod_ready.go:82] duration metric: took 400.682725ms for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.632533   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.828512   29734 request.go:632] Waited for 195.904312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:19:06.828585   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:19:06.828590   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.828597   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.828609   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.832250   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.028409   29734 request.go:632] Waited for 195.37963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.028509   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.028520   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.028531   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.028539   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.031632   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.032132   29734 pod_ready.go:93] pod "kube-proxy-xmfcj" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.032154   29734 pod_ready.go:82] duration metric: took 399.612352ms for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.032166   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.228285   29734 request.go:632] Waited for 196.052237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:19:07.228353   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:19:07.228358   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.228365   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.228370   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.231898   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.428163   29734 request.go:632] Waited for 195.609083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:07.428234   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:07.428239   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.428247   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.428257   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.431921   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.432585   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.432606   29734 pod_ready.go:82] duration metric: took 400.431576ms for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.432615   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.628703   29734 request.go:632] Waited for 196.028502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:19:07.628784   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:19:07.628794   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.628801   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.628806   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.632103   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.828108   29734 request.go:632] Waited for 195.437367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.828177   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.828184   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.828193   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.828198   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.831700   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.832276   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.832297   29734 pod_ready.go:82] duration metric: took 399.675807ms for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.832310   29734 pod_ready.go:39] duration metric: took 3.200765806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
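
The exchanges above are minikube polling each kube-system pod until its Ready condition reports True, throttling itself between requests. A minimal sketch of that pattern with client-go — assuming a standard kubeconfig and borrowing one pod name from the log purely as an example — might look like:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the `has status "Ready":"True"` lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location; the pod name below is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-7rrxk", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```

The ~200ms "Waited ... due to client-side throttling" lines above come from the client's rate limiter doing the pacing between exactly this kind of GET.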
	I0917 17:19:07.832330   29734 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:19:07.832384   29734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:19:07.849109   29734 api_server.go:72] duration metric: took 18.011632627s to wait for apiserver process to appear ...
	I0917 17:19:07.849139   29734 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:19:07.849160   29734 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0917 17:19:07.853417   29734 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0917 17:19:07.853492   29734 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0917 17:19:07.853502   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.853515   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.853524   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.854467   29734 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 17:19:07.854585   29734 api_server.go:141] control plane version: v1.31.1
	I0917 17:19:07.854603   29734 api_server.go:131] duration metric: took 5.457921ms to wait for apiserver health ...
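
Once the pods are Ready, the log records a raw probe of the API server's /healthz and /version endpoints. A rough sketch of such a probe, using the server address from the log and skipping TLS verification purely for illustration (minikube itself presents the cluster CA and client certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: address taken from the log; InsecureSkipVerify is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.195:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok", as seen in the log.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
```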
	I0917 17:19:07.854613   29734 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:19:08.027910   29734 request.go:632] Waited for 173.234881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.028000   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.028009   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.028020   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.028029   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.032889   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.038488   29734 system_pods.go:59] 17 kube-system pods found
	I0917 17:19:08.038523   29734 system_pods.go:61] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:19:08.038531   29734 system_pods.go:61] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:19:08.038538   29734 system_pods.go:61] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:19:08.038544   29734 system_pods.go:61] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:19:08.038548   29734 system_pods.go:61] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:19:08.038554   29734 system_pods.go:61] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:19:08.038560   29734 system_pods.go:61] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:19:08.038565   29734 system_pods.go:61] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:19:08.038571   29734 system_pods.go:61] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:19:08.038576   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:19:08.038581   29734 system_pods.go:61] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:19:08.038588   29734 system_pods.go:61] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:19:08.038594   29734 system_pods.go:61] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:19:08.038602   29734 system_pods.go:61] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:19:08.038608   29734 system_pods.go:61] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:19:08.038614   29734 system_pods.go:61] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:19:08.038619   29734 system_pods.go:61] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:19:08.038630   29734 system_pods.go:74] duration metric: took 184.006064ms to wait for pod list to return data ...
	I0917 17:19:08.038642   29734 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:19:08.228086   29734 request.go:632] Waited for 189.360557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:19:08.228158   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:19:08.228164   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.228171   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.228175   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.232546   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.232759   29734 default_sa.go:45] found service account: "default"
	I0917 17:19:08.232777   29734 default_sa.go:55] duration metric: took 194.128353ms for default service account to be created ...
	I0917 17:19:08.232788   29734 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:19:08.428219   29734 request.go:632] Waited for 195.365702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.428285   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.428291   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.428298   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.428302   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.435169   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:19:08.440681   29734 system_pods.go:86] 17 kube-system pods found
	I0917 17:19:08.440708   29734 system_pods.go:89] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:19:08.440713   29734 system_pods.go:89] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:19:08.440718   29734 system_pods.go:89] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:19:08.440721   29734 system_pods.go:89] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:19:08.440725   29734 system_pods.go:89] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:19:08.440729   29734 system_pods.go:89] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:19:08.440732   29734 system_pods.go:89] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:19:08.440736   29734 system_pods.go:89] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:19:08.440739   29734 system_pods.go:89] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:19:08.440743   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:19:08.440746   29734 system_pods.go:89] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:19:08.440749   29734 system_pods.go:89] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:19:08.440753   29734 system_pods.go:89] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:19:08.440756   29734 system_pods.go:89] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:19:08.440759   29734 system_pods.go:89] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:19:08.440762   29734 system_pods.go:89] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:19:08.440765   29734 system_pods.go:89] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:19:08.440771   29734 system_pods.go:126] duration metric: took 207.978033ms to wait for k8s-apps to be running ...
	I0917 17:19:08.440782   29734 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:19:08.440838   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:19:08.456872   29734 system_svc.go:56] duration metric: took 16.081642ms WaitForService to wait for kubelet
	I0917 17:19:08.456902   29734 kubeadm.go:582] duration metric: took 18.619431503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:19:08.456921   29734 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:19:08.628409   29734 request.go:632] Waited for 171.408526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0917 17:19:08.628469   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0917 17:19:08.628474   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.628482   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.628486   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.632523   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.633362   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:19:08.633400   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:19:08.633421   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:19:08.633426   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:19:08.633432   29734 node_conditions.go:105] duration metric: took 176.504883ms to run NodePressure ...
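
The NodePressure step lists each node's ephemeral-storage and CPU capacity. A small sketch of reading the same fields through client-go, under the same kubeconfig assumption as above:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the "node storage ephemeral capacity ..." / "node cpu capacity ..." lines above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```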
	I0917 17:19:08.633446   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:19:08.633478   29734 start.go:255] writing updated cluster config ...
	I0917 17:19:08.635758   29734 out.go:201] 
	I0917 17:19:08.637200   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:08.637324   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:08.638868   29734 out.go:177] * Starting "ha-181247-m03" control-plane node in "ha-181247" cluster
	I0917 17:19:08.639925   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:19:08.639949   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:19:08.640044   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:19:08.640054   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:19:08.640142   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:08.640314   29734 start.go:360] acquireMachinesLock for ha-181247-m03: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:19:08.640366   29734 start.go:364] duration metric: took 33.619µs to acquireMachinesLock for "ha-181247-m03"
	I0917 17:19:08.640384   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:19:08.640476   29734 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0917 17:19:08.641862   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:19:08.641944   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:08.641977   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:08.657294   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0917 17:19:08.657808   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:08.658350   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:08.658370   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:08.658741   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:08.658909   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:08.659047   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:08.659174   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:19:08.659216   29734 client.go:168] LocalClient.Create starting
	I0917 17:19:08.659266   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:19:08.659308   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:19:08.659335   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:19:08.659406   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:19:08.659432   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:19:08.659448   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:19:08.659476   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:19:08.659487   29734 main.go:141] libmachine: (ha-181247-m03) Calling .PreCreateCheck
	I0917 17:19:08.659660   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:08.660045   29734 main.go:141] libmachine: Creating machine...
	I0917 17:19:08.660059   29734 main.go:141] libmachine: (ha-181247-m03) Calling .Create
	I0917 17:19:08.660254   29734 main.go:141] libmachine: (ha-181247-m03) Creating KVM machine...
	I0917 17:19:08.661565   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found existing default KVM network
	I0917 17:19:08.661742   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found existing private KVM network mk-ha-181247
	I0917 17:19:08.661892   29734 main.go:141] libmachine: (ha-181247-m03) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 ...
	I0917 17:19:08.661927   29734 main.go:141] libmachine: (ha-181247-m03) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:19:08.662011   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:08.661912   30902 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:19:08.662088   29734 main.go:141] libmachine: (ha-181247-m03) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:19:08.890784   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:08.890624   30902 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa...
	I0917 17:19:09.187633   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:09.187519   30902 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/ha-181247-m03.rawdisk...
	I0917 17:19:09.187663   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Writing magic tar header
	I0917 17:19:09.187673   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Writing SSH key tar header
	I0917 17:19:09.187681   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:09.187637   30902 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 ...
	I0917 17:19:09.187772   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03
	I0917 17:19:09.187787   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:19:09.187812   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:19:09.187825   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 (perms=drwx------)
	I0917 17:19:09.187835   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:19:09.187842   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:19:09.187848   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:19:09.187858   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:19:09.187865   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:19:09.187872   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home
	I0917 17:19:09.187877   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Skipping /home - not owner
	I0917 17:19:09.187886   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:19:09.187894   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:19:09.187900   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:19:09.187907   29734 main.go:141] libmachine: (ha-181247-m03) Creating domain...
	I0917 17:19:09.189156   29734 main.go:141] libmachine: (ha-181247-m03) define libvirt domain using xml: 
	I0917 17:19:09.189172   29734 main.go:141] libmachine: (ha-181247-m03) <domain type='kvm'>
	I0917 17:19:09.189187   29734 main.go:141] libmachine: (ha-181247-m03)   <name>ha-181247-m03</name>
	I0917 17:19:09.189198   29734 main.go:141] libmachine: (ha-181247-m03)   <memory unit='MiB'>2200</memory>
	I0917 17:19:09.189203   29734 main.go:141] libmachine: (ha-181247-m03)   <vcpu>2</vcpu>
	I0917 17:19:09.189207   29734 main.go:141] libmachine: (ha-181247-m03)   <features>
	I0917 17:19:09.189212   29734 main.go:141] libmachine: (ha-181247-m03)     <acpi/>
	I0917 17:19:09.189216   29734 main.go:141] libmachine: (ha-181247-m03)     <apic/>
	I0917 17:19:09.189220   29734 main.go:141] libmachine: (ha-181247-m03)     <pae/>
	I0917 17:19:09.189224   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189242   29734 main.go:141] libmachine: (ha-181247-m03)   </features>
	I0917 17:19:09.189249   29734 main.go:141] libmachine: (ha-181247-m03)   <cpu mode='host-passthrough'>
	I0917 17:19:09.189257   29734 main.go:141] libmachine: (ha-181247-m03)   
	I0917 17:19:09.189262   29734 main.go:141] libmachine: (ha-181247-m03)   </cpu>
	I0917 17:19:09.189271   29734 main.go:141] libmachine: (ha-181247-m03)   <os>
	I0917 17:19:09.189275   29734 main.go:141] libmachine: (ha-181247-m03)     <type>hvm</type>
	I0917 17:19:09.189282   29734 main.go:141] libmachine: (ha-181247-m03)     <boot dev='cdrom'/>
	I0917 17:19:09.189286   29734 main.go:141] libmachine: (ha-181247-m03)     <boot dev='hd'/>
	I0917 17:19:09.189293   29734 main.go:141] libmachine: (ha-181247-m03)     <bootmenu enable='no'/>
	I0917 17:19:09.189297   29734 main.go:141] libmachine: (ha-181247-m03)   </os>
	I0917 17:19:09.189302   29734 main.go:141] libmachine: (ha-181247-m03)   <devices>
	I0917 17:19:09.189309   29734 main.go:141] libmachine: (ha-181247-m03)     <disk type='file' device='cdrom'>
	I0917 17:19:09.189368   29734 main.go:141] libmachine: (ha-181247-m03)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/boot2docker.iso'/>
	I0917 17:19:09.189394   29734 main.go:141] libmachine: (ha-181247-m03)       <target dev='hdc' bus='scsi'/>
	I0917 17:19:09.189406   29734 main.go:141] libmachine: (ha-181247-m03)       <readonly/>
	I0917 17:19:09.189417   29734 main.go:141] libmachine: (ha-181247-m03)     </disk>
	I0917 17:19:09.189430   29734 main.go:141] libmachine: (ha-181247-m03)     <disk type='file' device='disk'>
	I0917 17:19:09.189444   29734 main.go:141] libmachine: (ha-181247-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:19:09.189463   29734 main.go:141] libmachine: (ha-181247-m03)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/ha-181247-m03.rawdisk'/>
	I0917 17:19:09.189481   29734 main.go:141] libmachine: (ha-181247-m03)       <target dev='hda' bus='virtio'/>
	I0917 17:19:09.189493   29734 main.go:141] libmachine: (ha-181247-m03)     </disk>
	I0917 17:19:09.189500   29734 main.go:141] libmachine: (ha-181247-m03)     <interface type='network'>
	I0917 17:19:09.189510   29734 main.go:141] libmachine: (ha-181247-m03)       <source network='mk-ha-181247'/>
	I0917 17:19:09.189521   29734 main.go:141] libmachine: (ha-181247-m03)       <model type='virtio'/>
	I0917 17:19:09.189533   29734 main.go:141] libmachine: (ha-181247-m03)     </interface>
	I0917 17:19:09.189541   29734 main.go:141] libmachine: (ha-181247-m03)     <interface type='network'>
	I0917 17:19:09.189577   29734 main.go:141] libmachine: (ha-181247-m03)       <source network='default'/>
	I0917 17:19:09.189603   29734 main.go:141] libmachine: (ha-181247-m03)       <model type='virtio'/>
	I0917 17:19:09.189620   29734 main.go:141] libmachine: (ha-181247-m03)     </interface>
	I0917 17:19:09.189637   29734 main.go:141] libmachine: (ha-181247-m03)     <serial type='pty'>
	I0917 17:19:09.189648   29734 main.go:141] libmachine: (ha-181247-m03)       <target port='0'/>
	I0917 17:19:09.189655   29734 main.go:141] libmachine: (ha-181247-m03)     </serial>
	I0917 17:19:09.189666   29734 main.go:141] libmachine: (ha-181247-m03)     <console type='pty'>
	I0917 17:19:09.189676   29734 main.go:141] libmachine: (ha-181247-m03)       <target type='serial' port='0'/>
	I0917 17:19:09.189681   29734 main.go:141] libmachine: (ha-181247-m03)     </console>
	I0917 17:19:09.189686   29734 main.go:141] libmachine: (ha-181247-m03)     <rng model='virtio'>
	I0917 17:19:09.189696   29734 main.go:141] libmachine: (ha-181247-m03)       <backend model='random'>/dev/random</backend>
	I0917 17:19:09.189708   29734 main.go:141] libmachine: (ha-181247-m03)     </rng>
	I0917 17:19:09.189719   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189725   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189733   29734 main.go:141] libmachine: (ha-181247-m03)   </devices>
	I0917 17:19:09.189744   29734 main.go:141] libmachine: (ha-181247-m03) </domain>
	I0917 17:19:09.189754   29734 main.go:141] libmachine: (ha-181247-m03) 
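
The driver then hands XML like the dump above to libvirt to define and boot the guest. A compressed sketch of that step with the libvirt Go bindings — an assumption on my part; minikube's kvm2 driver wraps libvirt through its own machine plugin — looks roughly like:

```go
package main

import (
	"fmt"

	"libvirt.org/go/libvirt" // assumed binding; requires libvirt development headers to build
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	// Define the persistent domain from the XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	// Illustrative only; a full document like the one logged above would go here.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		fmt.Println(err)
	}
}
```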
	I0917 17:19:09.196712   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:88:12:68 in network default
	I0917 17:19:09.197192   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:09.197208   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring networks are active...
	I0917 17:19:09.197831   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring network default is active
	I0917 17:19:09.198205   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring network mk-ha-181247 is active
	I0917 17:19:09.198544   29734 main.go:141] libmachine: (ha-181247-m03) Getting domain xml...
	I0917 17:19:09.199186   29734 main.go:141] libmachine: (ha-181247-m03) Creating domain...
	I0917 17:19:10.470752   29734 main.go:141] libmachine: (ha-181247-m03) Waiting to get IP...
	I0917 17:19:10.471534   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:10.472003   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:10.472058   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:10.471980   30902 retry.go:31] will retry after 230.368754ms: waiting for machine to come up
	I0917 17:19:10.704673   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:10.705152   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:10.705180   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:10.705104   30902 retry.go:31] will retry after 344.628649ms: waiting for machine to come up
	I0917 17:19:11.051458   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.051952   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.051969   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.051922   30902 retry.go:31] will retry after 429.299996ms: waiting for machine to come up
	I0917 17:19:11.482452   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.482986   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.483018   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.482928   30902 retry.go:31] will retry after 445.767937ms: waiting for machine to come up
	I0917 17:19:11.930607   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.931010   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.931032   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.930985   30902 retry.go:31] will retry after 522.333996ms: waiting for machine to come up
	I0917 17:19:12.455383   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:12.455913   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:12.455960   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:12.455891   30902 retry.go:31] will retry after 687.049109ms: waiting for machine to come up
	I0917 17:19:13.144894   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:13.145357   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:13.145382   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:13.145313   30902 retry.go:31] will retry after 1.171486205s: waiting for machine to come up
	I0917 17:19:14.317844   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:14.318370   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:14.318397   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:14.318330   30902 retry.go:31] will retry after 1.218607108s: waiting for machine to come up
	I0917 17:19:15.539487   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:15.540058   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:15.540083   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:15.540017   30902 retry.go:31] will retry after 1.749617094s: waiting for machine to come up
	I0917 17:19:17.290964   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:17.291439   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:17.291474   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:17.291380   30902 retry.go:31] will retry after 2.306914749s: waiting for machine to come up
	I0917 17:19:19.599499   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:19.599990   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:19.600020   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:19.599937   30902 retry.go:31] will retry after 2.681763013s: waiting for machine to come up
	I0917 17:19:22.284617   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:22.284998   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:22.285015   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:22.284962   30902 retry.go:31] will retry after 3.378188576s: waiting for machine to come up
	I0917 17:19:25.665734   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:25.666176   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:25.666198   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:25.666147   30902 retry.go:31] will retry after 2.801526949s: waiting for machine to come up
	I0917 17:19:28.471310   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:28.471831   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:28.471868   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:28.471800   30902 retry.go:31] will retry after 4.266119746s: waiting for machine to come up
	I0917 17:19:32.742334   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.742918   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has current primary IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.742940   29734 main.go:141] libmachine: (ha-181247-m03) Found IP for machine: 192.168.39.122
	I0917 17:19:32.742954   29734 main.go:141] libmachine: (ha-181247-m03) Reserving static IP address...
	I0917 17:19:32.743333   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find host DHCP lease matching {name: "ha-181247-m03", mac: "52:54:00:48:b5:33", ip: "192.168.39.122"} in network mk-ha-181247
	I0917 17:19:32.819277   29734 main.go:141] libmachine: (ha-181247-m03) Reserved static IP address: 192.168.39.122
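
Getting the guest's address is a poll of the network's DHCP leases with a growing delay, as the "will retry after ..." lines show. A stripped-down sketch of that loop, with a hypothetical lookupLeaseIP standing in for the driver's lease query against libvirt:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder for querying DHCP leases for the guest's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay — the log's "230ms ... 4.2s" progression.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("no IP yet, retrying after %v\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:48:b5:33", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
```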
	I0917 17:19:32.819306   29734 main.go:141] libmachine: (ha-181247-m03) Waiting for SSH to be available...
	I0917 17:19:32.819316   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Getting to WaitForSSH function...
	I0917 17:19:32.821761   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.822169   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:32.822190   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.822367   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using SSH client type: external
	I0917 17:19:32.822395   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa (-rw-------)
	I0917 17:19:32.822427   29734 main.go:141] libmachine: (ha-181247-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:19:32.822453   29734 main.go:141] libmachine: (ha-181247-m03) DBG | About to run SSH command:
	I0917 17:19:32.822467   29734 main.go:141] libmachine: (ha-181247-m03) DBG | exit 0
	I0917 17:19:32.953457   29734 main.go:141] libmachine: (ha-181247-m03) DBG | SSH cmd err, output: <nil>: 
	I0917 17:19:32.953739   29734 main.go:141] libmachine: (ha-181247-m03) KVM machine creation complete!
	I0917 17:19:32.954036   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:32.954714   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:32.954923   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:32.955073   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:19:32.955089   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:19:32.956240   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:19:32.956256   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:19:32.956263   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:19:32.956278   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:32.958371   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.958730   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:32.958751   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.958900   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:32.959056   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:32.959167   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:32.959266   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:32.959385   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:32.959602   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:32.959614   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:19:33.068826   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:19:33.068847   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:19:33.068856   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.071615   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.072011   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.072039   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.072171   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.072367   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.072508   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.072621   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.072787   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.072944   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.072953   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:19:33.186697   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:19:33.186760   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:19:33.186770   29734 main.go:141] libmachine: Provisioning with buildroot...
	I0917 17:19:33.186781   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.187034   29734 buildroot.go:166] provisioning hostname "ha-181247-m03"
	I0917 17:19:33.187063   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.187269   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.189788   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.190166   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.190198   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.190387   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.190562   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.190695   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.190795   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.190937   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.191097   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.191108   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247-m03 && echo "ha-181247-m03" | sudo tee /etc/hostname
	I0917 17:19:33.316880   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247-m03
	
	I0917 17:19:33.316904   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.319374   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.319803   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.319837   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.319999   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.320190   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.320329   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.320437   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.320568   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.320768   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.320792   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:19:33.445343   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
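
Once SSH answers, provisioning is a sequence of shell commands run over that connection: the `exit 0` probe, `cat /etc/os-release`, and the hostname edit just above. A bare-bones sketch with golang.org/x/crypto/ssh, reusing the machine key path and address from the log:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The external ssh invocation above uses StrictHostKeyChecking=no; same effect here, for illustration.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.122:22", "docker",
		"/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}
```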
	I0917 17:19:33.445372   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:19:33.445395   29734 buildroot.go:174] setting up certificates
	I0917 17:19:33.445411   29734 provision.go:84] configureAuth start
	I0917 17:19:33.445420   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.445691   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:33.448403   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.448827   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.448855   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.449004   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.451416   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.451797   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.451824   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.451990   29734 provision.go:143] copyHostCerts
	I0917 17:19:33.452021   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:19:33.452060   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:19:33.452073   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:19:33.452157   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:19:33.452252   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:19:33.452277   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:19:33.452287   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:19:33.452342   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:19:33.452417   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:19:33.452440   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:19:33.452450   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:19:33.452487   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:19:33.452551   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247-m03 san=[127.0.0.1 192.168.39.122 ha-181247-m03 localhost minikube]
	I0917 17:19:33.590042   29734 provision.go:177] copyRemoteCerts
	I0917 17:19:33.590093   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:19:33.590120   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.592691   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.593024   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.593067   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.593247   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.593427   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.593600   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.593736   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:33.681307   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:19:33.681385   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:19:33.708421   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:19:33.708517   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 17:19:33.735759   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:19:33.735833   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:19:33.761523   29734 provision.go:87] duration metric: took 316.098149ms to configureAuth
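	The provisioning steps above generate a server certificate for ha-181247-m03 with the SANs listed at provision.go:117 and then scp ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The following is a minimal Go sketch of issuing such a CA-signed server cert with crypto/x509; the CA file paths, the RSA/PKCS#1 key format and the validity period are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair (placeholder paths; minikube keeps these under .minikube/certs).
		caCertPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA key in PKCS#1 form
		if err != nil {
			panic(err)
		}

		// Server key and certificate with the SANs shown in the log for ha-181247-m03.
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-181247-m03"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity window
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-181247-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.122")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		writePEM := func(path, typ string, b []byte, mode os.FileMode) {
			if err := os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: typ, Bytes: b}), mode); err != nil {
				panic(err)
			}
		}
		writePEM("server.pem", "CERTIFICATE", der, 0o644)
		writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey), 0o600)
	}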
	I0917 17:19:33.761555   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:19:33.761848   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:33.761935   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.764433   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.764922   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.764961   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.765242   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.765475   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.765667   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.765834   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.766032   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.766257   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.766281   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:19:34.007705   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:19:34.007740   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:19:34.007752   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetURL
	I0917 17:19:34.009192   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using libvirt version 6000000
	I0917 17:19:34.011683   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.012061   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.012101   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.012253   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:19:34.012267   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:19:34.012274   29734 client.go:171] duration metric: took 25.353048014s to LocalClient.Create
	I0917 17:19:34.012303   29734 start.go:167] duration metric: took 25.35312837s to libmachine.API.Create "ha-181247"
	I0917 17:19:34.012316   29734 start.go:293] postStartSetup for "ha-181247-m03" (driver="kvm2")
	I0917 17:19:34.012329   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:19:34.012362   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.012602   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:19:34.012626   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.015389   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.015790   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.015816   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.016029   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.016197   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.016319   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.016473   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.104236   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:19:34.108602   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:19:34.108625   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:19:34.108692   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:19:34.108762   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:19:34.108774   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:19:34.108863   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:19:34.118497   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:19:34.143886   29734 start.go:296] duration metric: took 131.555198ms for postStartSetup
	I0917 17:19:34.143930   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:34.144583   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:34.147117   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.147484   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.147515   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.147804   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:34.148012   29734 start.go:128] duration metric: took 25.507526501s to createHost
	I0917 17:19:34.148037   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.150418   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.150758   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.150785   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.150996   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.151166   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.151307   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.151445   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.151606   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:34.151799   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:34.151814   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:19:34.262347   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593574.236617709
	
	I0917 17:19:34.262366   29734 fix.go:216] guest clock: 1726593574.236617709
	I0917 17:19:34.262375   29734 fix.go:229] Guest: 2024-09-17 17:19:34.236617709 +0000 UTC Remote: 2024-09-17 17:19:34.148025415 +0000 UTC m=+141.888904346 (delta=88.592294ms)
	I0917 17:19:34.262395   29734 fix.go:200] guest clock delta is within tolerance: 88.592294ms
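	The fix.go lines above run `date +%s.%N` on the guest, compare the result to the host clock and accept the ~88ms delta. A minimal sketch of that comparison follows; the guest value is the one printed in the log, and the tolerance constant is an assumption (the real threshold may differ).

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) time.Time {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec := int64(0)
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec)
	}

	func main() {
		guest := parseGuestClock("1726593574.236617709") // value taken from the log above
		host := time.Now()
		delta := host.Sub(guest)
		const tolerance = 1 * time.Second // assumption: actual tolerance may differ
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}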
	I0917 17:19:34.262400   29734 start.go:83] releasing machines lock for "ha-181247-m03", held for 25.622025247s
	I0917 17:19:34.262422   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.262684   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:34.265426   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.265760   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.265794   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.268093   29734 out.go:177] * Found network options:
	I0917 17:19:34.269521   29734 out.go:177]   - NO_PROXY=192.168.39.195,192.168.39.11
	W0917 17:19:34.270946   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 17:19:34.270971   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:19:34.270990   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271522   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271710   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271824   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:19:34.271864   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	W0917 17:19:34.271881   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 17:19:34.271901   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:19:34.271971   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:19:34.271987   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.274729   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.274812   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275145   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.275165   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275213   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.275228   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275325   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.275470   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.275551   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.275611   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.275729   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.275738   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.275876   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.275939   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.531860   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:19:34.538856   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:19:34.538991   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:19:34.556557   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:19:34.556582   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:19:34.556664   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:19:34.574233   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:19:34.590846   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:19:34.590914   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:19:34.606281   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:19:34.620682   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:19:34.740105   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:19:34.889013   29734 docker.go:233] disabling docker service ...
	I0917 17:19:34.889085   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:19:34.904179   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:19:34.918084   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:19:35.067080   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:19:35.213525   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:19:35.228510   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:19:35.249534   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:19:35.249615   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.261455   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:19:35.261533   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.273150   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.284319   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.296139   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:19:35.307727   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.318993   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.338300   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.350602   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:19:35.360817   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:19:35.360880   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:19:35.375350   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:19:35.385443   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:19:35.508400   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
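	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts CRI-O. A minimal Go sketch equivalent in spirit to the first sed command (the pause_image rewrite) follows; the file path and replacement value are taken from the log, the in-process regexp approach is just an illustration.

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}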
	I0917 17:19:35.609779   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:19:35.609860   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:19:35.614634   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:19:35.614701   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:19:35.618547   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:19:35.659190   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:19:35.659274   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:19:35.689203   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:19:35.721078   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:19:35.722575   29734 out.go:177]   - env NO_PROXY=192.168.39.195
	I0917 17:19:35.724058   29734 out.go:177]   - env NO_PROXY=192.168.39.195,192.168.39.11
	I0917 17:19:35.725224   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:35.728092   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:35.728468   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:35.728496   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:35.728708   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:19:35.733137   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:19:35.746300   29734 mustload.go:65] Loading cluster: ha-181247
	I0917 17:19:35.746532   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:35.746838   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:35.746876   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:35.762171   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 17:19:35.762651   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:35.763150   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:35.763183   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:35.763541   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:35.763747   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:19:35.765372   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:19:35.765673   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:35.765714   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:35.781734   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0917 17:19:35.782089   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:35.782536   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:35.782558   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:35.782909   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:35.783101   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:19:35.783265   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.122
	I0917 17:19:35.783277   29734 certs.go:194] generating shared ca certs ...
	I0917 17:19:35.783294   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.783429   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:19:35.783466   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:19:35.783476   29734 certs.go:256] generating profile certs ...
	I0917 17:19:35.783540   29734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:19:35.783565   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327
	I0917 17:19:35.783578   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.122 192.168.39.254]
	I0917 17:19:35.857068   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 ...
	I0917 17:19:35.857099   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327: {Name:mkaec4fe728dbd262613238450879676d5138a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.857295   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327 ...
	I0917 17:19:35.857310   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327: {Name:mkae136412c99dae36859e1e80126c8d56b77cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.857389   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:19:35.857574   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:19:35.857700   29734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:19:35.857715   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:19:35.857734   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:19:35.857746   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:19:35.857759   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:19:35.857771   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:19:35.857784   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:19:35.857796   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:19:35.873337   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:19:35.873452   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:19:35.873484   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:19:35.873494   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:19:35.873520   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:19:35.873544   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:19:35.873572   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:19:35.873611   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:19:35.873640   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:19:35.873663   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:19:35.873674   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:35.873707   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:19:35.876770   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:35.877171   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:19:35.877197   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:35.877480   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:19:35.877668   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:19:35.877831   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:19:35.877940   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:19:35.953578   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 17:19:35.959804   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 17:19:35.980063   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 17:19:35.986792   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 17:19:35.999691   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 17:19:36.005080   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 17:19:36.019905   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 17:19:36.025075   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 17:19:36.039517   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 17:19:36.044604   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 17:19:36.056083   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 17:19:36.061370   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 17:19:36.074689   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:19:36.101573   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:19:36.127141   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:19:36.153027   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:19:36.178358   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 17:19:36.203619   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 17:19:36.228855   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:19:36.254491   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:19:36.280182   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:19:36.305547   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:19:36.331470   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:19:36.358264   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 17:19:36.377242   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 17:19:36.395522   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 17:19:36.413957   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 17:19:36.432410   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 17:19:36.450293   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 17:19:36.467354   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 17:19:36.488029   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:19:36.494263   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:19:36.505981   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.510479   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.510526   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.516347   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:19:36.527975   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:19:36.539870   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.545269   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.545333   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.551325   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:19:36.562948   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:19:36.574691   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.579551   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.579620   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.585554   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:19:36.598719   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:19:36.604510   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:19:36.604566   29734 kubeadm.go:934] updating node {m03 192.168.39.122 8443 v1.31.1 crio true true} ...
	I0917 17:19:36.604637   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:19:36.604662   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:19:36.604697   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:19:36.622296   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:19:36.622381   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
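	kube-vip.go:137 above prints the generated static-pod manifest for the control-plane VIP. The sketch below renders a cut-down manifest of the same shape with Go's text/template, parameterised on the VIP address and port shown in the log; it is not minikube's actual template, and fields omitted here (leader election, ARP, capabilities) would need to be added for a working deployment.

	package main

	import (
		"os"
		"text/template"
	)

	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: eth0
	    - name: cp_enable
	      value: "true"
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/admin.conf
	    name: kubeconfig
	`

	type vipParams struct {
		VIP  string
		Port string
	}

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		// Values taken from the generated config in the log above.
		if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443"}); err != nil {
			panic(err)
		}
	}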
	I0917 17:19:36.622452   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:19:36.632840   29734 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0917 17:19:36.632903   29734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0917 17:19:36.644101   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0917 17:19:36.644127   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:19:36.644149   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0917 17:19:36.644170   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:19:36.644172   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0917 17:19:36.644181   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:19:36.644215   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:19:36.644230   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:19:36.664028   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0917 17:19:36.664070   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0917 17:19:36.664139   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:19:36.664192   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0917 17:19:36.664220   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0917 17:19:36.664236   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:19:36.700097   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0917 17:19:36.700145   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
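	binary.go:74 above resolves each Kubernetes binary from dl.k8s.io together with its .sha256 file before copying it to /var/lib/minikube/binaries. The following is a minimal sketch of downloading kubectl v1.31.1 and verifying it against the published checksum, with error handling simplified; the URLs are the ones printed in the log.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sumFile, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		want := strings.Fields(string(sumFile))[0]
		sum := sha256.Sum256(bin)
		got := hex.EncodeToString(sum[:])
		if got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
		}
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubectl verified and written")
	}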
	I0917 17:19:37.652846   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 17:19:37.663720   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 17:19:37.681818   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:19:37.700412   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:19:37.720111   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:19:37.724467   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:19:37.738316   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:19:37.877851   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:19:37.897444   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:19:37.897909   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:37.897966   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:37.916204   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0917 17:19:37.916645   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:37.917142   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:37.917169   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:37.917548   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:37.917750   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:19:37.917882   29734 start.go:317] joinCluster: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:19:37.918049   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 17:19:37.918073   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:19:37.921635   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:37.922220   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:19:37.922248   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:37.922463   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:19:37.922813   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:19:37.923000   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:19:37.923167   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:19:38.097518   29734 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:19:38.097574   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ed067.7lqtjmb7q7q1uvw2 --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443"
	I0917 17:20:01.587638   29734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ed067.7lqtjmb7q7q1uvw2 --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443": (23.490043145s)
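	The join above is driven by `kubeadm token create --print-join-command --ttl=0` on the existing control plane (start.go:317) and then by running the printed `kubeadm join ... --control-plane` command on m03. A minimal sketch of the first half follows, assuming kubeadm is on PATH on the primary node; in the log this is executed over SSH rather than locally.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask kubeadm for a join command with a non-expiring token, as in the log above.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubeadm token create failed: %v\n%s", err, out))
		}
		fmt.Printf("join command: %s", out)
	}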
	I0917 17:20:01.587678   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 17:20:02.179280   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247-m03 minikube.k8s.io/updated_at=2024_09_17T17_20_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=false
	I0917 17:20:02.344849   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-181247-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 17:20:02.506759   29734 start.go:319] duration metric: took 24.58887463s to joinCluster
	I0917 17:20:02.506838   29734 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:20:02.507278   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:20:02.508859   29734 out.go:177] * Verifying Kubernetes components...
	I0917 17:20:02.511078   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:02.768010   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:20:02.800525   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:20:02.800769   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 17:20:02.800825   29734 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0917 17:20:02.801006   29734 node_ready.go:35] waiting up to 6m0s for node "ha-181247-m03" to be "Ready" ...
	I0917 17:20:02.801066   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:02.801074   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:02.801081   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:02.801086   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:02.805370   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:03.301972   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:03.301996   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:03.302008   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:03.302015   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:03.305841   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:03.801643   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:03.801673   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:03.801684   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:03.801690   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:03.806263   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:04.301828   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:04.301851   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:04.301864   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:04.301873   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:04.305853   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:04.802066   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:04.802092   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:04.802101   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:04.802104   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:04.806363   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:04.806930   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:05.302264   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:05.302290   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:05.302302   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:05.302308   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:05.306375   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:05.801380   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:05.801411   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:05.801422   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:05.801427   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:05.805349   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:06.301374   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:06.301407   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:06.301422   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:06.301432   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:06.304898   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:06.801207   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:06.801264   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:06.801274   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:06.801277   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:06.804783   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:07.302189   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:07.302210   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:07.302221   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:07.302227   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:07.305561   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:07.306249   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:07.802160   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:07.802186   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:07.802198   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:07.802205   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:07.806023   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:08.301810   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:08.301834   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:08.301847   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:08.301851   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:08.309265   29734 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:08.801195   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:08.801217   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:08.801240   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:08.801245   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:08.804983   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.301155   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:09.301179   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:09.301187   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:09.301190   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:09.304767   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.801398   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:09.801421   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:09.801429   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:09.801433   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:09.805173   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.806007   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:10.301421   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:10.301445   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:10.301453   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:10.301458   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:10.304752   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:10.801766   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:10.801787   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:10.801795   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:10.801799   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:10.805910   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:11.301250   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:11.301272   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:11.301283   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:11.301287   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:11.305087   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:11.801381   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:11.801404   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:11.801414   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:11.801418   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:11.805431   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:11.806115   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:12.301979   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:12.302001   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:12.302011   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:12.302018   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:12.306005   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:12.802217   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:12.802239   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:12.802247   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:12.802252   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:12.805899   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:13.301283   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:13.301321   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:13.301330   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:13.301336   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:13.305773   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:13.801647   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:13.801669   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:13.801677   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:13.801683   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:13.805088   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:14.302183   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:14.302209   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:14.302221   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:14.302227   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:14.305690   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:14.306309   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:14.801430   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:14.801456   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:14.801466   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:14.801472   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:14.806457   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:15.301422   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:15.301449   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:15.301461   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:15.301469   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:15.305063   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:15.802100   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:15.802121   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:15.802129   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:15.802136   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:15.805547   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.301923   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:16.301945   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:16.301953   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:16.301957   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:16.305406   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.801782   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:16.801804   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:16.801813   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:16.801817   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:16.805309   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.806034   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:17.301706   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.301732   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.301743   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.301751   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.305245   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.305731   29734 node_ready.go:49] node "ha-181247-m03" has status "Ready":"True"
	I0917 17:20:17.305749   29734 node_ready.go:38] duration metric: took 14.504731184s for node "ha-181247-m03" to be "Ready" ...
	I0917 17:20:17.305757   29734 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:20:17.305816   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:17.305825   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.305832   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.305837   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.312471   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:17.319460   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.319541   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5lmg4
	I0917 17:20:17.319549   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.319556   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.319560   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.326614   29734 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:17.327955   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.327969   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.327977   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.327981   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.336447   29734 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 17:20:17.337223   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.337266   29734 pod_ready.go:82] duration metric: took 17.77938ms for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.337278   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.337334   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bdthh
	I0917 17:20:17.337341   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.337348   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.337355   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.340474   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.341148   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.341166   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.341174   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.341178   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.343927   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.344502   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.344520   29734 pod_ready.go:82] duration metric: took 7.234573ms for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.344533   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.344596   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247
	I0917 17:20:17.344606   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.344616   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.344623   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.348107   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.348913   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.348924   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.348931   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.348937   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.351861   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.352464   29734 pod_ready.go:93] pod "etcd-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.352484   29734 pod_ready.go:82] duration metric: took 7.943434ms for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.352498   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.352551   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m02
	I0917 17:20:17.352559   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.352566   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.352576   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.355372   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.355924   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:17.355937   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.355944   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.355948   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.358721   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.359152   29734 pod_ready.go:93] pod "etcd-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.359167   29734 pod_ready.go:82] duration metric: took 6.66316ms for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.359179   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.502636   29734 request.go:632] Waited for 143.380911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m03
	I0917 17:20:17.502720   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m03
	I0917 17:20:17.502729   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.502741   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.502747   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.506289   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.702257   29734 request.go:632] Waited for 195.390906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.702343   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.702351   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.702360   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.702370   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.705911   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.706599   29734 pod_ready.go:93] pod "etcd-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.706622   29734 pod_ready.go:82] duration metric: took 347.432415ms for pod "etcd-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.706639   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.902395   29734 request.go:632] Waited for 195.682205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:20:17.902475   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:20:17.902483   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.902494   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.902505   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.906384   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.102577   29734 request.go:632] Waited for 195.384056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:18.102628   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:18.102633   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.102643   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.102651   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.107608   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.108198   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.108225   29734 pod_ready.go:82] duration metric: took 401.578528ms for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.108239   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.302202   29734 request.go:632] Waited for 193.888108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:20:18.302259   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:20:18.302266   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.302276   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.302282   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.306431   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.502397   29734 request.go:632] Waited for 195.211721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:18.502464   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:18.502469   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.502477   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.502485   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.506567   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.507076   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.507093   29734 pod_ready.go:82] duration metric: took 398.84232ms for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.507105   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.702660   29734 request.go:632] Waited for 195.459967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m03
	I0917 17:20:18.702724   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m03
	I0917 17:20:18.702731   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.702742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.702752   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.706494   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.902093   29734 request.go:632] Waited for 194.812702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:18.902157   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:18.902162   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.902170   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.902175   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.905661   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.906182   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.906202   29734 pod_ready.go:82] duration metric: took 399.08599ms for pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.906213   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.102266   29734 request.go:632] Waited for 195.989867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:20:19.102334   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:20:19.102339   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.102346   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.102350   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.105958   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.302064   29734 request.go:632] Waited for 195.397143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:19.302136   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:19.302147   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.302159   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.302169   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.305615   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.306389   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:19.306409   29734 pod_ready.go:82] duration metric: took 400.188287ms for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.306422   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.502428   29734 request.go:632] Waited for 195.912747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:20:19.502485   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:20:19.502491   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.502498   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.502503   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.506085   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.702442   29734 request.go:632] Waited for 195.383611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:19.702502   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:19.702509   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.702519   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.702535   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.705984   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.706637   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:19.706660   29734 pod_ready.go:82] duration metric: took 400.225093ms for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.706669   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.901720   29734 request.go:632] Waited for 194.990972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m03
	I0917 17:20:19.901798   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m03
	I0917 17:20:19.901806   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.901815   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.901824   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.905444   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.102496   29734 request.go:632] Waited for 196.368768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.102579   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.102586   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.102600   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.102608   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.106315   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.106730   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.106749   29734 pod_ready.go:82] duration metric: took 400.070285ms for pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.106758   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42gpk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.301796   29734 request.go:632] Waited for 194.972487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-42gpk
	I0917 17:20:20.301870   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-42gpk
	I0917 17:20:20.301877   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.301887   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.301892   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.305925   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:20.501826   29734 request.go:632] Waited for 195.291541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.501896   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.501910   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.501921   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.501931   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.506082   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:20.506756   29734 pod_ready.go:93] pod "kube-proxy-42gpk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.506789   29734 pod_ready.go:82] duration metric: took 400.024002ms for pod "kube-proxy-42gpk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.506800   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.701779   29734 request.go:632] Waited for 194.912668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:20:20.701868   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:20:20.701879   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.701887   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.701893   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.705311   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.901827   29734 request.go:632] Waited for 195.713484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:20.901907   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:20.901922   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.901933   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.901939   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.905569   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.906149   29734 pod_ready.go:93] pod "kube-proxy-7rrxk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.906171   29734 pod_ready.go:82] duration metric: took 399.363425ms for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.906183   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.102209   29734 request.go:632] Waited for 195.95697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:20:21.102264   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:20:21.102269   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.102277   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.102280   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.105937   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.302138   29734 request.go:632] Waited for 195.366412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:21.302216   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:21.302222   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.302231   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.302238   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.305707   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.306269   29734 pod_ready.go:93] pod "kube-proxy-xmfcj" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:21.306286   29734 pod_ready.go:82] duration metric: took 400.091414ms for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.306296   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.501862   29734 request.go:632] Waited for 195.510489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:20:21.501916   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:20:21.501947   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.501960   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.501971   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.505337   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.702381   29734 request.go:632] Waited for 196.386954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:21.702453   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:21.702462   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.702469   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.702473   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.706002   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.706592   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:21.706614   29734 pod_ready.go:82] duration metric: took 400.31163ms for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.706623   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.902664   29734 request.go:632] Waited for 195.968567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:20:21.902728   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:20:21.902734   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.902742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.902748   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.906255   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:22.102348   29734 request.go:632] Waited for 195.386611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:22.102411   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:22.102417   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.102425   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.102429   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.108294   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:22.109362   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.109389   29734 pod_ready.go:82] duration metric: took 402.758186ms for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.109403   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.301913   29734 request.go:632] Waited for 192.42907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m03
	I0917 17:20:22.301971   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m03
	I0917 17:20:22.301976   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.301999   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.302006   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.306135   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.502041   29734 request.go:632] Waited for 195.243772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:22.502115   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:22.502124   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.502131   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.502137   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.506991   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.507512   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.507534   29734 pod_ready.go:82] duration metric: took 398.122459ms for pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.507548   29734 pod_ready.go:39] duration metric: took 5.201782079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:20:22.507564   29734 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:20:22.507650   29734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:20:22.526178   29734 api_server.go:72] duration metric: took 20.019308385s to wait for apiserver process to appear ...
	I0917 17:20:22.526212   29734 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:20:22.526234   29734 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0917 17:20:22.531460   29734 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0917 17:20:22.531521   29734 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0917 17:20:22.531526   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.531534   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.531541   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.532521   29734 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 17:20:22.532592   29734 api_server.go:141] control plane version: v1.31.1
	I0917 17:20:22.532610   29734 api_server.go:131] duration metric: took 6.39045ms to wait for apiserver health ...
	I0917 17:20:22.532619   29734 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:20:22.702015   29734 request.go:632] Waited for 169.322514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:22.702074   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:22.702080   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.702101   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.702110   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.708463   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:22.715445   29734 system_pods.go:59] 24 kube-system pods found
	I0917 17:20:22.715473   29734 system_pods.go:61] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:20:22.715477   29734 system_pods.go:61] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:20:22.715481   29734 system_pods.go:61] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:20:22.715485   29734 system_pods.go:61] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:20:22.715488   29734 system_pods.go:61] "etcd-ha-181247-m03" [793159b6-0236-4b2b-b5a4-ed2f0c9219c2] Running
	I0917 17:20:22.715491   29734 system_pods.go:61] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:20:22.715495   29734 system_pods.go:61] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:20:22.715498   29734 system_pods.go:61] "kindnet-tkbmg" [62acea1a-4ee4-475b-9a04-6b8d50d7f1a0] Running
	I0917 17:20:22.715501   29734 system_pods.go:61] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:20:22.715507   29734 system_pods.go:61] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:20:22.715511   29734 system_pods.go:61] "kube-apiserver-ha-181247-m03" [7cdb7a90-1646-4bcf-9665-46ce3c679990] Running
	I0917 17:20:22.715517   29734 system_pods.go:61] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:20:22.715522   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:20:22.715529   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m03" [65f2d1cf-4862-4325-afd6-746cc48d2d7f] Running
	I0917 17:20:22.715534   29734 system_pods.go:61] "kube-proxy-42gpk" [7bb2338a-c1fd-4f7e-8981-57b7319cb457] Running
	I0917 17:20:22.715542   29734 system_pods.go:61] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:20:22.715547   29734 system_pods.go:61] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:20:22.715554   29734 system_pods.go:61] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:20:22.715560   29734 system_pods.go:61] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:20:22.715566   29734 system_pods.go:61] "kube-scheduler-ha-181247-m03" [fc544b16-e876-4966-a423-d52ff9041059] Running
	I0917 17:20:22.715569   29734 system_pods.go:61] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:20:22.715575   29734 system_pods.go:61] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:20:22.715579   29734 system_pods.go:61] "kube-vip-ha-181247-m03" [44816f72-d64d-4989-8719-b340c1b854d2] Running
	I0917 17:20:22.715584   29734 system_pods.go:61] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:20:22.715590   29734 system_pods.go:74] duration metric: took 182.963459ms to wait for pod list to return data ...
	I0917 17:20:22.715600   29734 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:20:22.902113   29734 request.go:632] Waited for 186.424159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:20:22.902163   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:20:22.902169   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.902177   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.902186   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.906212   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.906329   29734 default_sa.go:45] found service account: "default"
	I0917 17:20:22.906343   29734 default_sa.go:55] duration metric: took 190.733459ms for default service account to be created ...
	I0917 17:20:22.906352   29734 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:20:23.102138   29734 request.go:632] Waited for 195.70342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:23.102207   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:23.102215   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:23.102225   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.102236   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:23.116110   29734 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0917 17:20:23.123175   29734 system_pods.go:86] 24 kube-system pods found
	I0917 17:20:23.123206   29734 system_pods.go:89] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:20:23.123214   29734 system_pods.go:89] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:20:23.123219   29734 system_pods.go:89] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:20:23.123225   29734 system_pods.go:89] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:20:23.123232   29734 system_pods.go:89] "etcd-ha-181247-m03" [793159b6-0236-4b2b-b5a4-ed2f0c9219c2] Running
	I0917 17:20:23.123237   29734 system_pods.go:89] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:20:23.123242   29734 system_pods.go:89] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:20:23.123248   29734 system_pods.go:89] "kindnet-tkbmg" [62acea1a-4ee4-475b-9a04-6b8d50d7f1a0] Running
	I0917 17:20:23.123253   29734 system_pods.go:89] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:20:23.123260   29734 system_pods.go:89] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:20:23.123266   29734 system_pods.go:89] "kube-apiserver-ha-181247-m03" [7cdb7a90-1646-4bcf-9665-46ce3c679990] Running
	I0917 17:20:23.123272   29734 system_pods.go:89] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:20:23.123278   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:20:23.123287   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m03" [65f2d1cf-4862-4325-afd6-746cc48d2d7f] Running
	I0917 17:20:23.123293   29734 system_pods.go:89] "kube-proxy-42gpk" [7bb2338a-c1fd-4f7e-8981-57b7319cb457] Running
	I0917 17:20:23.123302   29734 system_pods.go:89] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:20:23.123308   29734 system_pods.go:89] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:20:23.123316   29734 system_pods.go:89] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:20:23.123323   29734 system_pods.go:89] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:20:23.123332   29734 system_pods.go:89] "kube-scheduler-ha-181247-m03" [fc544b16-e876-4966-a423-d52ff9041059] Running
	I0917 17:20:23.123338   29734 system_pods.go:89] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:20:23.123346   29734 system_pods.go:89] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:20:23.123351   29734 system_pods.go:89] "kube-vip-ha-181247-m03" [44816f72-d64d-4989-8719-b340c1b854d2] Running
	I0917 17:20:23.123359   29734 system_pods.go:89] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:20:23.123367   29734 system_pods.go:126] duration metric: took 217.004917ms to wait for k8s-apps to be running ...
	I0917 17:20:23.123379   29734 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:20:23.123429   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:20:23.143737   29734 system_svc.go:56] duration metric: took 20.348178ms WaitForService to wait for kubelet
	I0917 17:20:23.143773   29734 kubeadm.go:582] duration metric: took 20.636908487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:20:23.143796   29734 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:20:23.302126   29734 request.go:632] Waited for 158.259398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0917 17:20:23.302204   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0917 17:20:23.302215   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:23.302225   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.302232   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:23.306459   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:23.307628   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307648   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307657   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307661   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307664   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307667   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307671   29734 node_conditions.go:105] duration metric: took 163.870275ms to run NodePressure ...
	I0917 17:20:23.307684   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:20:23.307702   29734 start.go:255] writing updated cluster config ...
	I0917 17:20:23.307971   29734 ssh_runner.go:195] Run: rm -f paused
	I0917 17:20:23.365174   29734 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 17:20:23.367722   29734 out.go:177] * Done! kubectl is now configured to use "ha-181247" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.136820779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842136798383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08f5354d-18d8-43cd-9181-196d03a36c75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.137339172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23079744-7a96-4f2e-974f-1713285a65fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.137421036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23079744-7a96-4f2e-974f-1713285a65fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.137672336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23079744-7a96-4f2e-974f-1713285a65fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.175719612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f3668fc-d064-452c-83d7-8b999be21a24 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.175818545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f3668fc-d064-452c-83d7-8b999be21a24 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.177320300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=907316de-8261-4464-9b07-936b9d0c9c10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.177802696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842177775135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=907316de-8261-4464-9b07-936b9d0c9c10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.178547208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfaad070-6a38-44cd-a2e6-853f5f39050a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.178633041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfaad070-6a38-44cd-a2e6-853f5f39050a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.178906999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfaad070-6a38-44cd-a2e6-853f5f39050a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.223094788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36dddc3b-f869-4c06-96ee-039cc1e2be34 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.223222245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36dddc3b-f869-4c06-96ee-039cc1e2be34 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.224565931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e12eb54-e27e-4013-98af-2a46bbd80794 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.224985306Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842224963539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e12eb54-e27e-4013-98af-2a46bbd80794 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.225933025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6ec81d7-d4d8-41a0-9606-a2bb8a8e1591 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.226004533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6ec81d7-d4d8-41a0-9606-a2bb8a8e1591 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.226310892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6ec81d7-d4d8-41a0-9606-a2bb8a8e1591 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.265228567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10976060-6211-4c5e-90a2-7a9dddc4cb78 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.265319991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10976060-6211-4c5e-90a2-7a9dddc4cb78 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.266318761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7decfae-ff14-4cf4-af32-5d603fe09195 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.266771031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842266746425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7decfae-ff14-4cf4-af32-5d603fe09195 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.268357401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=020be686-1479-4dd3-8d0e-c09e10107d79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.268441449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=020be686-1479-4dd3-8d0e-c09e10107d79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:24:02 ha-181247 crio[655]: time="2024-09-17 17:24:02.268696359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=020be686-1479-4dd3-8d0e-c09e10107d79 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1e590e905eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   032ab62b0ab68       busybox-7dff88458-w8wxj
	f192df08c3590       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   4564f11734089       coredns-7c65d6cfc9-bdthh
	595bdaca307f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   251b80e9b641b       coredns-7c65d6cfc9-5lmg4
	4c6e5f75c9480       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   e0668ebeee0ff       storage-provisioner
	aa3e79172e867       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2c5d3e765b253       kube-proxy-7rrxk
	8d41e13428885       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   030199bb820c5       kindnet-2tkbp
	fe133e1d0be65       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8f4315718476a       kube-vip-ha-181247
	e131e7c4af3fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   6c0fc2dc035f9       etcd-ha-181247
	1bd357b39ecdb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   3a00f7ab2aec7       kube-controller-manager-ha-181247
	2b77bc3ea3167       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   64083dac55fed       kube-scheduler-ha-181247
	c48764653b979       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   ad3ec6af7c947       kube-apiserver-ha-181247
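
The table above is the CRI-level view of the primary node: every control-plane container (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-vip) plus CoreDNS, kindnet, kube-proxy and the storage provisioner is Running with attempt 0, so ha-181247 itself came up cleanly. The same view can be regenerated from the VM if needed, roughly as follows (a sketch; the -p flag is assumed to name this run's profile):

    # CRI container listing on the primary control-plane node
    minikube -p ha-181247 ssh -- sudo crictl ps -a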
	
	
	==> coredns [595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242] <==
	[INFO] 127.0.0.1:49564 - 56083 "HINFO IN 7646535878500117191.6117038551668512559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015772836s
	[INFO] 10.244.1.2:46481 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.022391474s
	[INFO] 10.244.2.2:58475 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000197137s
	[INFO] 10.244.0.4:47160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262396s
	[INFO] 10.244.0.4:43644 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000098798s
	[INFO] 10.244.0.4:58082 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00083865s
	[INFO] 10.244.1.2:33599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003427065s
	[INFO] 10.244.1.2:48415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218431s
	[INFO] 10.244.1.2:36800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118274s
	[INFO] 10.244.2.2:43997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248398s
	[INFO] 10.244.2.2:35973 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001811s
	[INFO] 10.244.2.2:49572 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172284s
	[INFO] 10.244.0.4:47826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002065843s
	[INFO] 10.244.0.4:36193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199582s
	[INFO] 10.244.0.4:50628 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110526s
	[INFO] 10.244.0.4:44724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114759s
	[INFO] 10.244.0.4:42511 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083739s
	[INFO] 10.244.2.2:46937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116808s
	[INFO] 10.244.2.2:44451 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173075s
	[INFO] 10.244.0.4:40459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064325s
	[INFO] 10.244.1.2:49457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184596s
	[INFO] 10.244.1.2:38498 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205346s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130934s
	[INFO] 10.244.2.2:41589 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130541s
	[INFO] 10.244.0.4:45130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138569s
	
	
	==> coredns [f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5] <==
	[INFO] 10.244.1.2:48013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120798s
	[INFO] 10.244.2.2:52666 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00190695s
	[INFO] 10.244.2.2:46125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169465s
	[INFO] 10.244.2.2:56262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604819s
	[INFO] 10.244.2.2:50732 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016708s
	[INFO] 10.244.2.2:42284 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132466s
	[INFO] 10.244.0.4:37678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213338s
	[INFO] 10.244.0.4:44751 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122274s
	[INFO] 10.244.0.4:56988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001602111s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026031s
	[INFO] 10.244.1.2:40978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206846s
	[INFO] 10.244.1.2:41313 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097139s
	[INFO] 10.244.1.2:50208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151609s
	[INFO] 10.244.2.2:49264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143158s
	[INFO] 10.244.2.2:54921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162093s
	[INFO] 10.244.0.4:54768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211558s
	[INFO] 10.244.0.4:47021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048005s
	[INFO] 10.244.0.4:52698 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004567s
	[INFO] 10.244.1.2:39357 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237795s
	[INFO] 10.244.1.2:48172 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183611s
	[INFO] 10.244.2.2:56434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125357s
	[INFO] 10.244.2.2:37159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179695s
	[INFO] 10.244.0.4:40381 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150761s
	[INFO] 10.244.0.4:39726 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074302s
	[INFO] 10.244.0.4:39990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097242s
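
The query logs above show lookups arriving from all three pod subnets (10.244.0.x, 10.244.1.x, 10.244.2.x) and resolving cluster and host names normally, so in-cluster DNS was healthy at capture time. A one-off probe along these lines reproduces the same resolution path by hand (a sketch: the kubeconfig context is assumed to be named after the ha-181247 profile, and dns-probe is just an arbitrary pod name):

    # throwaway busybox pod that queries the cluster DNS, then is removed
    kubectl --context ha-181247 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local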
	
	
	==> describe nodes <==
	Name:               ha-181247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-181247
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef45c02f40245a0a3ede964289ca350
	  System UUID:                fef45c02-f402-45a0-a3ed-e964289ca350
	  Boot ID:                    3253b46a-acef-407f-8fd6-3d5cae46a6bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w8wxj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-5lmg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m5s
	  kube-system                 coredns-7c65d6cfc9-bdthh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-181247                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-2tkbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-181247             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-181247    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-7rrxk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-181247             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-181247                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m3s   kube-proxy       
	  Normal  Starting                 6m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s  kubelet          Node ha-181247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s  kubelet          Node ha-181247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s  kubelet          Node ha-181247 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal  NodeReady                5m52s  kubelet          Node ha-181247 status is now: NodeReady
	  Normal  RegisteredNode           5m6s   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal  RegisteredNode           3m55s  node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	
	
	Name:               ha-181247-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:21:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-181247-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2585a68084874db38baf46d679282ed1
	  System UUID:                2585a680-8487-4db3-8baf-46d679282ed1
	  Boot ID:                    5bfdf389-469b-42f9-975f-6c72da7743b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96b8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-181247-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m15s
	  kube-system                 kindnet-qqpgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-181247-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-ha-181247-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-xmfcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-181247-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-181247-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-181247-m02 status is now: NodeNotReady
	
	
	Name:               ha-181247-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_20_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-181247-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b42e80aa44a47aebcfcad072d252d58
	  System UUID:                2b42e80a-a44a-47ae-bcfc-ad072d252d58
	  Boot ID:                    dd80ee86-310e-4a32-94de-53cde30919d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mxrbl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-181247-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-tkbmg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-181247-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-181247-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-42gpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-181247-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-vip-ha-181247-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m5s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m5s)  kubelet          Node ha-181247-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m5s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal  RegisteredNode           3m55s                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	
	
	Name:               ha-181247-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_21_01_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-181247-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b33a6f0f712480eacc4183b870e9eb2
	  System UUID:                6b33a6f0-f712-480e-acc4-183b870e9eb2
	  Boot ID:                    85fce420-4742-47df-a8ae-66c460bcd5eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ntzg5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-shlht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-181247-m04 status is now: NodeReady
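
In the node descriptions above, ha-181247-m02 is the only node whose conditions have flipped to Unknown ("Kubelet stopped posting node status") and that carries the node.kubernetes.io/unreachable taints, while ha-181247, -m03 and -m04 remain Ready; that is consistent with a secondary control-plane node that has been stopped. To pull just that signal instead of the full describe output, something like the following works (a sketch; the context name is assumed to match the profile):

    # overview of all nodes in the HA cluster
    kubectl --context ha-181247 get nodes -o wide
    # just the condition types and statuses of the stopped secondary
    kubectl --context ha-181247 get node ha-181247-m02 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'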
	
	
	==> dmesg <==
	[Sep17 17:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051348] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884229] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.613872] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611337] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.348569] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.067341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053278] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.203985] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.135345] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.305138] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.213157] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +4.747183] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.062031] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.385150] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.092383] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.405737] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 17:18] kauditd_printk_skb: 41 callbacks suppressed
	[ +43.240359] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91] <==
	{"level":"warn","ts":"2024-09-17T17:24:02.515515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.561573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.571386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.577432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.589166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.598032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.605590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.611217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.614947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.615227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.622135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.629724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.636980Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.640945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.645888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.651931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.659363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.667450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.672308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.676294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.680921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.693001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.699995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.715405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:24:02.742407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
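
The repeated "dropped internal Raft message since sending buffer is full" warnings above all target peer e73967f545c05a22 with remote-peer-active:false, meaning the local etcd member can no longer reach one of its peers; with ha-181247-m02 marked NotReady, that is the expected symptom rather than a separate fault. A direct health check from a surviving member would look roughly like this (a sketch: the kubeconfig context name and the /var/lib/minikube/certs/etcd cert paths are assumptions based on minikube's usual layout):

    # ask the local member for the health of every cluster endpoint
    kubectl --context ha-181247 -n kube-system exec etcd-ha-181247 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health --cluster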
	
	
	==> kernel <==
	 17:24:02 up 6 min,  0 users,  load average: 0.43, 0.56, 0.28
	Linux ha-181247 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2] <==
	I0917 17:23:29.801356       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:23:39.802483       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:23:39.802623       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:23:39.802776       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:23:39.802804       1 main.go:299] handling current node
	I0917 17:23:39.802829       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:23:39.802845       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:23:39.802913       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:23:39.802932       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:23:49.809032       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:23:49.809261       1 main.go:299] handling current node
	I0917 17:23:49.809316       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:23:49.809335       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:23:49.809490       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:23:49.809511       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:23:49.809609       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:23:49.809640       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:23:59.801406       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:23:59.801562       1 main.go:299] handling current node
	I0917 17:23:59.801626       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:23:59.801645       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:23:59.801812       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:23:59.801836       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:23:59.801906       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:23:59.801925       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d] <==
	W0917 17:17:51.094671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0917 17:17:51.095771       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:17:51.109019       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:17:51.111941       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 17:17:52.254623       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 17:17:52.271476       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0917 17:17:52.389757       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:17:56.707987       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0917 17:17:56.873233       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0917 17:20:28.669364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58950: use of closed network connection
	E0917 17:20:28.870825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58974: use of closed network connection
	E0917 17:20:29.150547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59002: use of closed network connection
	E0917 17:20:29.343681       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59024: use of closed network connection
	E0917 17:20:29.545545       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59054: use of closed network connection
	E0917 17:20:29.731828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59064: use of closed network connection
	E0917 17:20:29.914034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59080: use of closed network connection
	E0917 17:20:30.105963       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59096: use of closed network connection
	E0917 17:20:30.297412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59114: use of closed network connection
	E0917 17:20:30.622591       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59136: use of closed network connection
	E0917 17:20:30.804754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59144: use of closed network connection
	E0917 17:20:30.991519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59150: use of closed network connection
	E0917 17:20:31.181489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59176: use of closed network connection
	E0917 17:20:31.364445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59192: use of closed network connection
	E0917 17:20:31.551192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59214: use of closed network connection
	W0917 17:21:51.109604       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.122 192.168.39.195]
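
The "use of closed network connection" errors above are reads on the apiserver VIP (192.168.39.254:8443) from the host (192.168.39.1) being cut off mid-request, and the final line shows the apiserver resetting the endpoints of the kubernetes service to [192.168.39.122 192.168.39.195], i.e. 192.168.39.11 (m02) has been dropped from the control-plane endpoint list. That state can be confirmed outside the logs with (a sketch; context name assumed):

    # control-plane addresses currently backing the default/kubernetes service
    kubectl --context ha-181247 -n default get endpoints kubernetes -o yaml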
	
	
	==> kube-controller-manager [1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a] <==
	I0917 17:20:56.915161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247"
	I0917 17:21:01.359253       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-181247-m04\" does not exist"
	I0917 17:21:01.388525       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-181247-m04" podCIDRs=["10.244.3.0/24"]
	I0917 17:21:01.388577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.388609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.395586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.527720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.946412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:02.689396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:06.070265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:06.070709       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181247-m04"
	I0917 17:21:06.274495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:11.519267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:20.728668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:20.728824       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181247-m04"
	I0917 17:21:20.745462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:21.087006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:32.209669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:22:21.116786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:21.117029       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181247-m04"
	I0917 17:22:21.150207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:21.323712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.682964ms"
	I0917 17:22:21.323825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.393µs"
	I0917 17:22:22.714436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:26.437332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	
	
	==> kube-proxy [aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:17:58.949022       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:17:58.972251       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0917 17:17:58.972399       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:17:59.022186       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:17:59.022253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:17:59.022279       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:17:59.025845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:17:59.026705       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:17:59.026735       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:17:59.028836       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:17:59.028842       1 config.go:199] "Starting service config controller"
	I0917 17:17:59.029428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:17:59.031271       1 config.go:328] "Starting node config controller"
	I0917 17:17:59.029496       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:17:59.038640       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:17:59.038313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:17:59.039243       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:17:59.132284       1 shared_informer.go:320] Caches are synced for service config
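
The nftables errors at the top of this section are kube-proxy's start-up cleanup of nftables rules failing with "Operation not supported", apparently because the guest kernel has no nft backend; it then selects the iptables proxier and syncs its caches normally, so these lines look like start-up noise rather than a cause of the failures. The Service rules it programmed can be inspected from inside the VM, e.g. (a sketch; the -p profile flag is assumed to match this run's profile name):

    # list the iptables NAT rules kube-proxy programs for Services
    minikube -p ha-181247 ssh -- sudo iptables -t nat -L KUBE-SERVICES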
	
	
	==> kube-scheduler [2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4] <==
	I0917 17:20:24.375688       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-96b8c" node="ha-181247-m02"
	E0917 17:20:24.377314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w8wxj\": pod busybox-7dff88458-w8wxj is already assigned to node \"ha-181247\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w8wxj" node="ha-181247"
	E0917 17:20:24.377407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 681ace64-6c78-437e-9e9d-46edd2b4a8c4(default/busybox-7dff88458-w8wxj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w8wxj"
	E0917 17:20:24.377434       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w8wxj\": pod busybox-7dff88458-w8wxj is already assigned to node \"ha-181247\"" pod="default/busybox-7dff88458-w8wxj"
	I0917 17:20:24.377472       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w8wxj" node="ha-181247"
	E0917 17:21:01.463474       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-w5j8r\": pod kindnet-w5j8r is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-w5j8r" node="ha-181247-m04"
	E0917 17:21:01.463579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9366aa1-9205-4967-a75b-641916ad7d21(kube-system/kindnet-w5j8r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-w5j8r"
	E0917 17:21:01.463614       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-w5j8r\": pod kindnet-w5j8r is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-w5j8r"
	I0917 17:21:01.463649       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-w5j8r" node="ha-181247-m04"
	E0917 17:21:01.480551       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shlht\": pod kube-proxy-shlht is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shlht" node="ha-181247-m04"
	E0917 17:21:01.480634       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod af3ec07d-a374-46d8-b9ab-ac02aa23bb0f(kube-system/kube-proxy-shlht) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shlht"
	E0917 17:21:01.480653       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shlht\": pod kube-proxy-shlht is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-shlht"
	I0917 17:21:01.480686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shlht" node="ha-181247-m04"
	E0917 17:21:01.481212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481272       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8c3a39fb-fa0a-4e5d-ae4e-7c468cf8cc54(kube-system/kindnet-ntzg5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ntzg5"
	E0917 17:21:01.481288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-ntzg5"
	I0917 17:21:01.481324       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.481771       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod be89da91-3d03-49d5-9c40-8f0a10a29dc4(kube-system/kube-proxy-wxx9b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wxx9b"
	E0917 17:21:01.481794       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-wxx9b"
	I0917 17:21:01.481828       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.598636       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:21:01.598783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df1f81cf-787e-4442-b864-71023978df35(kube-system/kindnet-rjzts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rjzts"
	E0917 17:21:01.598965       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-rjzts"
	I0917 17:21:01.599124       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	
	
	==> kubelet <==
	Sep 17 17:22:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:22:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:22:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:22:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:22:52 ha-181247 kubelet[1302]: E0917 17:22:52.527231    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593772526851350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:22:52 ha-181247 kubelet[1302]: E0917 17:22:52.527257    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593772526851350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:02 ha-181247 kubelet[1302]: E0917 17:23:02.529919    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593782529402277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:02 ha-181247 kubelet[1302]: E0917 17:23:02.530408    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593782529402277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:12 ha-181247 kubelet[1302]: E0917 17:23:12.533937    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593792532832636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:12 ha-181247 kubelet[1302]: E0917 17:23:12.533964    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593792532832636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:22 ha-181247 kubelet[1302]: E0917 17:23:22.536598    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593802536184597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:22 ha-181247 kubelet[1302]: E0917 17:23:22.536639    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593802536184597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:32 ha-181247 kubelet[1302]: E0917 17:23:32.539139    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593812538377501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:32 ha-181247 kubelet[1302]: E0917 17:23:32.539188    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593812538377501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:42 ha-181247 kubelet[1302]: E0917 17:23:42.544088    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593822542967074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:42 ha-181247 kubelet[1302]: E0917 17:23:42.544161    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593822542967074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:52 ha-181247 kubelet[1302]: E0917 17:23:52.457301    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:23:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:23:52 ha-181247 kubelet[1302]: E0917 17:23:52.546397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593832545993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:52 ha-181247 kubelet[1302]: E0917 17:23:52.546438    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593832545993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:02 ha-181247 kubelet[1302]: E0917 17:24:02.550589    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842548800763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:02 ha-181247 kubelet[1302]: E0917 17:24:02.550620    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842548800763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-181247 -n ha-181247
helpers_test.go:261: (dbg) Run:  kubectl --context ha-181247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.00s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (57.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
E0917 17:24:08.843162   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (3.222661062s)

-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 17:24:07.358733   34900 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:07.358994   34900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:07.359002   34900 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:07.359007   34900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:07.359196   34900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:07.359404   34900 out.go:352] Setting JSON to false
	I0917 17:24:07.359437   34900 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:07.359493   34900 notify.go:220] Checking for updates...
	I0917 17:24:07.359899   34900 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:07.359916   34900 status.go:255] checking status of ha-181247 ...
	I0917 17:24:07.360304   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.360354   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.376733   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0917 17:24:07.377168   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.377794   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.377823   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.378206   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.378403   34900 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:07.380032   34900 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:07.380048   34900 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:07.380380   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.380422   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.395580   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0917 17:24:07.395991   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.396617   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.396637   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.396906   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.397099   34900 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:07.400089   34900 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:07.400509   34900 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:07.400536   34900 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:07.400698   34900 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:07.400991   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.401037   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.416719   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0917 17:24:07.417212   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.417674   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.417696   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.418028   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.418156   34900 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:07.418345   34900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:07.418383   34900 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:07.421109   34900 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:07.421601   34900 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:07.421627   34900 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:07.421827   34900 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:07.421995   34900 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:07.422108   34900 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:07.422197   34900 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:07.505354   34900 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:07.512333   34900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:07.528932   34900 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:07.528966   34900 api_server.go:166] Checking apiserver status ...
	I0917 17:24:07.528996   34900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:07.545738   34900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:07.557943   34900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:07.558028   34900 ssh_runner.go:195] Run: ls
	I0917 17:24:07.568661   34900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:07.572898   34900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:07.572928   34900 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:07.572939   34900 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:07.572964   34900 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:07.573409   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.573455   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.589716   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0917 17:24:07.590209   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.590731   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.590758   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.591096   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.591298   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:07.593089   34900 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:24:07.593104   34900 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:07.593416   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.593455   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.609885   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0917 17:24:07.610289   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.610731   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.610751   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.611120   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.611325   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:24:07.614328   34900 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:07.614834   34900 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:07.614861   34900 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:07.615008   34900 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:07.615322   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:07.615365   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:07.630646   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I0917 17:24:07.631110   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:07.631584   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:07.631605   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:07.631948   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:07.632167   34900 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:24:07.632358   34900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:07.632390   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:24:07.635107   34900 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:07.635510   34900 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:07.635534   34900 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:07.635688   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:24:07.635839   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:24:07.635952   34900 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:24:07.636092   34900 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:10.157463   34900 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:10.157541   34900 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:10.157576   34900 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:10.157584   34900 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:10.157607   34900 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:10.157616   34900 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:10.157927   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.157979   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.173860   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0917 17:24:10.174329   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.174792   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.174813   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.175116   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.175277   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:10.176714   34900 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:10.176729   34900 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:10.177012   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.177047   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.192554   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0917 17:24:10.193078   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.193618   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.193646   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.193963   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.194167   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:10.197205   34900 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:10.197592   34900 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:10.197621   34900 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:10.197750   34900 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:10.198055   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.198091   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.213113   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0917 17:24:10.213640   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.214208   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.214233   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.214598   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.214782   34900 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:10.215023   34900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:10.215062   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:10.218431   34900 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:10.218933   34900 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:10.218960   34900 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:10.219098   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:10.219292   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:10.219462   34900 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:10.219632   34900 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:10.310676   34900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:10.327303   34900 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:10.327331   34900 api_server.go:166] Checking apiserver status ...
	I0917 17:24:10.327370   34900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:10.346112   34900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:10.359312   34900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:10.359386   34900 ssh_runner.go:195] Run: ls
	I0917 17:24:10.364623   34900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:10.368903   34900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:10.368925   34900 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:10.368933   34900 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:10.368949   34900 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:10.369285   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.369327   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.385031   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0917 17:24:10.385450   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.385954   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.385973   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.386295   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.386476   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:10.387954   34900 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:10.387969   34900 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:10.388259   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.388315   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.403801   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I0917 17:24:10.404265   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.404764   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.404790   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.405168   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.405368   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:10.408058   34900 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:10.408514   34900 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:10.408534   34900 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:10.408689   34900 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:10.408992   34900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:10.409026   34900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:10.424569   34900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0917 17:24:10.424974   34900 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:10.425420   34900 main.go:141] libmachine: Using API Version  1
	I0917 17:24:10.425545   34900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:10.425901   34900 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:10.426059   34900 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:10.426222   34900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:10.426242   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:10.429251   34900 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:10.429672   34900 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:10.429703   34900 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:10.429835   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:10.430029   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:10.430173   34900 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:10.430300   34900 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:10.517782   34900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:10.535757   34900 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (4.842047139s)

-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 17:24:12.072294   35000 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:12.072418   35000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:12.072427   35000 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:12.072431   35000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:12.072617   35000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:12.072778   35000 out.go:352] Setting JSON to false
	I0917 17:24:12.072803   35000 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:12.072903   35000 notify.go:220] Checking for updates...
	I0917 17:24:12.073175   35000 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:12.073189   35000 status.go:255] checking status of ha-181247 ...
	I0917 17:24:12.073661   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.073717   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.091045   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0917 17:24:12.091554   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.092200   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.092221   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.092555   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.092767   35000 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:12.094423   35000 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:12.094437   35000 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:12.094860   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.094905   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.111259   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0917 17:24:12.111754   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.112193   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.112242   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.112555   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.112758   35000 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:12.115484   35000 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:12.115862   35000 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:12.115891   35000 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:12.115996   35000 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:12.116298   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.116362   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.132922   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0917 17:24:12.133444   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.133943   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.133964   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.134252   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.134405   35000 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:12.134602   35000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:12.134644   35000 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:12.137687   35000 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:12.138155   35000 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:12.138208   35000 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:12.138325   35000 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:12.138477   35000 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:12.138634   35000 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:12.138738   35000 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:12.227006   35000 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:12.234575   35000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:12.250308   35000 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:12.250343   35000 api_server.go:166] Checking apiserver status ...
	I0917 17:24:12.250375   35000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:12.266338   35000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:12.276561   35000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:12.276629   35000 ssh_runner.go:195] Run: ls
	I0917 17:24:12.281471   35000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:12.286038   35000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:12.286059   35000 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:12.286069   35000 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:12.286084   35000 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:12.286365   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.286403   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.303554   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0917 17:24:12.304118   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.304600   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.304635   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.304975   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.305156   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:12.306935   35000 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:24:12.306955   35000 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:12.307301   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.307339   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.322793   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I0917 17:24:12.323278   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.323806   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.323825   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.324100   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.324330   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:24:12.327363   35000 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:12.327887   35000 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:12.327920   35000 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:12.327990   35000 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:12.328279   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:12.328315   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:12.345486   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39465
	I0917 17:24:12.346018   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:12.346558   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:12.346585   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:12.346927   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:12.347110   35000 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:24:12.347294   35000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:12.347315   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:24:12.350500   35000 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:12.350945   35000 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:12.350978   35000 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:12.351159   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:24:12.351355   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:24:12.351533   35000 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:24:12.351704   35000 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:13.229499   35000 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:13.229564   35000 retry.go:31] will retry after 189.430524ms: dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:16.497493   35000 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:16.497569   35000 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:16.497584   35000 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:16.497593   35000 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:16.497632   35000 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:16.497640   35000 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:16.497945   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.497986   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.513666   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38517
	I0917 17:24:16.514178   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.514651   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.514674   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.515047   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.515260   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:16.516914   35000 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:16.516928   35000 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:16.517248   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.517296   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.532541   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0917 17:24:16.532958   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.533520   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.533547   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.533865   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.534090   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:16.536915   35000 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:16.537260   35000 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:16.537279   35000 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:16.537433   35000 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:16.537741   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.537780   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.552907   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0917 17:24:16.553405   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.553856   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.553879   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.554196   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.554367   35000 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:16.554547   35000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:16.554573   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:16.557580   35000 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:16.558014   35000 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:16.558044   35000 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:16.558191   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:16.558375   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:16.558514   35000 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:16.558624   35000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:16.641431   35000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:16.658227   35000 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:16.658253   35000 api_server.go:166] Checking apiserver status ...
	I0917 17:24:16.658290   35000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:16.676409   35000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:16.689317   35000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:16.689368   35000 ssh_runner.go:195] Run: ls
	I0917 17:24:16.694605   35000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:16.701339   35000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:16.701367   35000 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:16.701375   35000 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:16.701391   35000 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:16.701838   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.701877   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.717955   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0917 17:24:16.718539   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.719067   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.719106   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.719455   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.719667   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:16.721460   35000 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:16.721479   35000 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:16.721775   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.721813   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.737375   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0917 17:24:16.737866   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.738388   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.738418   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.738758   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.738954   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:16.742106   35000 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:16.742712   35000 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:16.742743   35000 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:16.742923   35000 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:16.743333   35000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:16.743392   35000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:16.759090   35000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33671
	I0917 17:24:16.759583   35000 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:16.760091   35000 main.go:141] libmachine: Using API Version  1
	I0917 17:24:16.760111   35000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:16.760416   35000 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:16.760631   35000 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:16.760842   35000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:16.760863   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:16.763960   35000 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:16.764508   35000 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:16.764565   35000 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:16.764755   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:16.764965   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:16.765146   35000 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:16.765306   35000 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:16.853799   35000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:16.869654   35000 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
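Each of the ha_test.go:428 `status` runs in this group fails the same way: the SSH dial to ha-181247-m02 at 192.168.39.11:22 returns "no route to host", so that node is reported as Host:Error with kubelet and apiserver Nonexistent while ha-181247, ha-181247-m03 and ha-181247-m04 stay healthy. A minimal manual check of that symptom would be the sketch below; it only reuses the guest IP, key path, and profile name that appear in the log above, and the `-o ConnectTimeout=5` flag is an assumption for convenience, not something the test itself passes.

	# hypothetical manual probe of the m02 guest's SSH reachability (paths/IP taken from the log above)
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa \
	  docker@192.168.39.11 true || echo "ha-181247-m02 unreachable over SSH"
	# re-query cluster status exactly as the test does
	out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr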
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (4.893759603s)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:24:18.516557   35116 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:18.516670   35116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:18.516678   35116 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:18.516681   35116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:18.516860   35116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:18.517017   35116 out.go:352] Setting JSON to false
	I0917 17:24:18.517043   35116 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:18.517146   35116 notify.go:220] Checking for updates...
	I0917 17:24:18.517504   35116 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:18.517520   35116 status.go:255] checking status of ha-181247 ...
	I0917 17:24:18.517964   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.518086   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.534040   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0917 17:24:18.534487   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.535304   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.535353   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.535693   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.535886   35116 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:18.537703   35116 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:18.537721   35116 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:18.538043   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.538097   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.553470   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I0917 17:24:18.553988   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.554470   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.554497   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.554820   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.555034   35116 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:18.558440   35116 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:18.558948   35116 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:18.558981   35116 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:18.559125   35116 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:18.559402   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.559442   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.575226   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0917 17:24:18.575843   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.576462   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.576498   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.576815   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.577082   35116 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:18.577316   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:18.577369   35116 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:18.580343   35116 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:18.580791   35116 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:18.580817   35116 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:18.581004   35116 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:18.581162   35116 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:18.581349   35116 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:18.581474   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:18.669244   35116 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:18.675826   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:18.697678   35116 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:18.697723   35116 api_server.go:166] Checking apiserver status ...
	I0917 17:24:18.697769   35116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:18.714707   35116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:18.727037   35116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:18.727088   35116 ssh_runner.go:195] Run: ls
	I0917 17:24:18.732756   35116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:18.737136   35116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:18.737163   35116 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:18.737175   35116 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:18.737192   35116 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:18.737622   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.737664   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.753988   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0917 17:24:18.754414   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.754864   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.754889   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.755172   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.755396   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:18.757159   35116 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:24:18.757174   35116 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:18.757515   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.757553   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.772580   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0917 17:24:18.773057   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.773584   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.773609   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.773921   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.774098   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:24:18.776975   35116 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:18.777428   35116 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:18.777452   35116 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:18.777592   35116 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:18.777899   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:18.777938   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:18.793498   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36499
	I0917 17:24:18.793974   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:18.794489   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:18.794509   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:18.794859   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:18.795065   35116 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:24:18.795258   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:18.795286   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:24:18.797984   35116 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:18.798384   35116 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:18.798413   35116 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:18.798546   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:24:18.798735   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:24:18.798883   35116 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:24:18.799037   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:19.565440   35116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:19.565504   35116 retry.go:31] will retry after 356.395328ms: dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:22.993500   35116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:22.993596   35116 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:22.993617   35116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:22.993629   35116 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:22.993665   35116 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:22.993680   35116 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:22.994103   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:22.994160   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.010157   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0917 17:24:23.010748   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.011232   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.011261   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.011573   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.011774   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:23.013519   35116 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:23.013535   35116 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:23.013883   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:23.013925   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.029115   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0917 17:24:23.029640   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.030147   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.030169   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.030486   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.030708   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:23.034208   35116 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:23.034741   35116 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:23.034761   35116 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:23.034987   35116 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:23.035410   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:23.035464   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.051291   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0917 17:24:23.051769   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.052271   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.052290   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.052575   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.052762   35116 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:23.052952   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:23.052974   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:23.055569   35116 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:23.055996   35116 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:23.056026   35116 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:23.056203   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:23.056387   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:23.056544   35116 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:23.056656   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:23.141567   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:23.157073   35116 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:23.157107   35116 api_server.go:166] Checking apiserver status ...
	I0917 17:24:23.157146   35116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:23.171956   35116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:23.182797   35116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:23.182874   35116 ssh_runner.go:195] Run: ls
	I0917 17:24:23.188288   35116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:23.195064   35116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:23.195101   35116 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:23.195112   35116 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:23.195133   35116 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:23.195455   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:23.195493   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.211911   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I0917 17:24:23.212421   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.212937   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.212958   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.213328   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.213556   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:23.215350   35116 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:23.215379   35116 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:23.215679   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:23.215717   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.232188   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0917 17:24:23.232723   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.233295   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.233321   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.233730   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.233940   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:23.237002   35116 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:23.237436   35116 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:23.237476   35116 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:23.237604   35116 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:23.238008   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:23.238183   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:23.254717   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0917 17:24:23.255182   35116 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:23.255711   35116 main.go:141] libmachine: Using API Version  1
	I0917 17:24:23.255739   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:23.256071   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:23.256277   35116 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:23.256498   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:23.256523   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:23.259860   35116 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:23.260275   35116 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:23.260319   35116 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:23.260418   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:23.260597   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:23.260748   35116 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:23.260872   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:23.350363   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:23.366778   35116 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (3.75901544s)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:24:25.998953   35216 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:25.999074   35216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:25.999082   35216 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:25.999086   35216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:25.999266   35216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:25.999464   35216 out.go:352] Setting JSON to false
	I0917 17:24:25.999497   35216 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:25.999602   35216 notify.go:220] Checking for updates...
	I0917 17:24:25.999969   35216 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:25.999986   35216 status.go:255] checking status of ha-181247 ...
	I0917 17:24:26.000414   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.000472   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.016423   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I0917 17:24:26.016872   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.017405   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.017426   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.017773   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.017949   35216 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:26.019526   35216 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:26.019553   35216 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:26.019888   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.019935   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.035130   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0917 17:24:26.035652   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.036156   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.036180   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.036546   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.036765   35216 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:26.039900   35216 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:26.040366   35216 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:26.040402   35216 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:26.040558   35216 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:26.040951   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.041001   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.057597   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0917 17:24:26.058130   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.058630   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.058655   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.058976   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.059165   35216 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:26.059360   35216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:26.059388   35216 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:26.062189   35216 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:26.062671   35216 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:26.062706   35216 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:26.062848   35216 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:26.063019   35216 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:26.063163   35216 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:26.063290   35216 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:26.150422   35216 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:26.159315   35216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:26.180807   35216 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:26.180853   35216 api_server.go:166] Checking apiserver status ...
	I0917 17:24:26.180904   35216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:26.198323   35216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:26.210859   35216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:26.210908   35216 ssh_runner.go:195] Run: ls
	I0917 17:24:26.216032   35216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:26.221244   35216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:26.221272   35216 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:26.221283   35216 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:26.221297   35216 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:26.221605   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.221646   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.236707   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0917 17:24:26.237254   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.237827   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.237848   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.238256   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.238438   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:26.240218   35216 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:24:26.240236   35216 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:26.240538   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.240570   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.256570   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0917 17:24:26.257076   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.257649   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.257671   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.258063   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.258243   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:24:26.261435   35216 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:26.261939   35216 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:26.261970   35216 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:26.262135   35216 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:26.262506   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:26.262571   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:26.278828   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0917 17:24:26.279292   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:26.279852   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:26.279874   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:26.280229   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:26.280410   35216 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:24:26.280589   35216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:26.280605   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:24:26.283844   35216 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:26.284286   35216 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:26.284321   35216 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:26.284500   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:24:26.284719   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:24:26.284882   35216 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:24:26.285021   35216 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:29.357551   35216 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:29.357656   35216 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:29.357674   35216 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:29.357707   35216 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:29.357728   35216 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:29.357737   35216 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:29.358183   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.358241   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.373575   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0917 17:24:29.374017   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.374501   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.374526   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.374859   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.375059   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:29.376833   35216 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:29.376851   35216 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:29.377176   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.377220   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.392353   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
	I0917 17:24:29.392811   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.393313   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.393331   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.393627   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.393842   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:29.396567   35216 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:29.396970   35216 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:29.396994   35216 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:29.397168   35216 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:29.397543   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.397589   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.412827   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0917 17:24:29.413307   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.413848   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.413870   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.414168   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.414349   35216 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:29.414556   35216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:29.414585   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:29.417380   35216 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:29.417826   35216 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:29.417848   35216 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:29.417994   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:29.418211   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:29.418354   35216 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:29.418502   35216 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:29.501825   35216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:29.517553   35216 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:29.517578   35216 api_server.go:166] Checking apiserver status ...
	I0917 17:24:29.517607   35216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:29.531400   35216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:29.541110   35216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:29.541175   35216 ssh_runner.go:195] Run: ls
	I0917 17:24:29.546303   35216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:29.550914   35216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:29.550944   35216 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:29.550955   35216 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:29.550974   35216 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:29.551291   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.551338   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.566581   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0917 17:24:29.567059   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.567583   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.567608   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.567930   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.568097   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:29.569558   35216 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:29.569574   35216 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:29.569894   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.569932   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.585152   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0917 17:24:29.585656   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.586118   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.586145   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.586448   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.586624   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:29.589598   35216 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:29.590073   35216 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:29.590117   35216 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:29.590270   35216 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:29.590600   35216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:29.590639   35216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:29.605974   35216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I0917 17:24:29.606488   35216 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:29.607080   35216 main.go:141] libmachine: Using API Version  1
	I0917 17:24:29.607099   35216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:29.607394   35216 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:29.607592   35216 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:29.607802   35216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:29.607824   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:29.610868   35216 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:29.611317   35216 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:29.611368   35216 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:29.611556   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:29.611749   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:29.611873   35216 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:29.612008   35216 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:29.697081   35216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:29.712625   35216 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
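The stderr block above ends with the per-node checks that the status command performs over SSH: "df -h /var" for storage capacity, "systemctl is-active kubelet" for the kubelet, and, on control-plane nodes, a probe of the apiserver healthz endpoint on the cluster VIP (api_server.go:253 in the log). As a rough, standalone illustration of that last probe only, the Go sketch below hits the same URL seen in the log; the helper name and the InsecureSkipVerify setting are assumptions for this sketch, not minikube's actual implementation.

	// Hypothetical sketch of the apiserver health probe logged above
	// ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverHealthy reports whether GET <endpoint>/healthz returns 200.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster uses self-signed certificates, so verification
			// is skipped here purely to keep the sketch self-contained.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}

A 200 response is what the log records as "returned 200: ok" before the node is marked APIServer:Running.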
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (3.773882791s)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:24:33.900251   35332 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:33.900377   35332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:33.900387   35332 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:33.900391   35332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:33.900609   35332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:33.900775   35332 out.go:352] Setting JSON to false
	I0917 17:24:33.900805   35332 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:33.900859   35332 notify.go:220] Checking for updates...
	I0917 17:24:33.901426   35332 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:33.901447   35332 status.go:255] checking status of ha-181247 ...
	I0917 17:24:33.901911   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:33.901987   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:33.923200   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43707
	I0917 17:24:33.923766   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:33.924433   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:33.924459   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:33.924896   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:33.925090   35332 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:33.926500   35332 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:33.926518   35332 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:33.926793   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:33.926826   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:33.942923   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0917 17:24:33.943466   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:33.943980   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:33.944007   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:33.944377   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:33.944578   35332 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:33.947711   35332 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:33.948169   35332 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:33.948197   35332 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:33.948391   35332 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:33.948780   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:33.948833   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:33.966162   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I0917 17:24:33.966582   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:33.967037   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:33.967059   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:33.967473   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:33.967686   35332 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:33.967918   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:33.967950   35332 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:33.970942   35332 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:33.971410   35332 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:33.971447   35332 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:33.971547   35332 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:33.971741   35332 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:33.971878   35332 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:33.972021   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:34.057444   35332 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:34.064232   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:34.083247   35332 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:34.083300   35332 api_server.go:166] Checking apiserver status ...
	I0917 17:24:34.083345   35332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:34.098732   35332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:34.109015   35332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:34.109071   35332 ssh_runner.go:195] Run: ls
	I0917 17:24:34.114052   35332 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:34.120146   35332 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:34.120172   35332 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:34.120187   35332 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:34.120206   35332 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:34.120500   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:34.120543   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:34.136283   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0917 17:24:34.136745   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:34.137204   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:34.137223   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:34.137545   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:34.137725   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:34.139333   35332 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:24:34.139347   35332 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:34.139669   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:34.139715   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:34.155268   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0917 17:24:34.155734   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:34.156216   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:34.156245   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:34.156645   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:34.156868   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:24:34.159501   35332 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:34.159977   35332 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:34.160005   35332 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:34.160207   35332 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:24:34.160542   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:34.160580   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:34.177131   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0917 17:24:34.177698   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:34.178229   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:34.178254   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:34.178606   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:34.178783   35332 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:24:34.178955   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:34.178976   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:24:34.182041   35332 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:34.182464   35332 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:24:34.182487   35332 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:24:34.182678   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:24:34.182849   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:24:34.182999   35332 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:24:34.183121   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	W0917 17:24:37.261459   35332 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.11:22: connect: no route to host
	W0917 17:24:37.261553   35332 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	E0917 17:24:37.261574   35332 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:37.261584   35332 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:24:37.261606   35332 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.11:22: connect: no route to host
	I0917 17:24:37.261613   35332 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:37.261918   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.261959   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.277017   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0917 17:24:37.277545   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.277971   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.277989   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.278303   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.278492   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:37.280033   35332 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:37.280051   35332 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:37.280337   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.280375   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.296819   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
	I0917 17:24:37.297381   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.297870   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.297892   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.298199   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.298345   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:37.301342   35332 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:37.301836   35332 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:37.301863   35332 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:37.302031   35332 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:37.302444   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.302490   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.318566   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0917 17:24:37.319061   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.319568   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.319596   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.319908   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.320082   35332 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:37.320292   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:37.320314   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:37.322989   35332 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:37.323364   35332 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:37.323389   35332 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:37.323534   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:37.323714   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:37.323844   35332 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:37.323983   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:37.409358   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:37.424806   35332 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:37.424834   35332 api_server.go:166] Checking apiserver status ...
	I0917 17:24:37.424866   35332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:37.441200   35332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:37.452663   35332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:37.452726   35332 ssh_runner.go:195] Run: ls
	I0917 17:24:37.458428   35332 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:37.462872   35332 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:37.462903   35332 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:37.462912   35332 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:37.462926   35332 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:37.463318   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.463358   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.479833   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0917 17:24:37.480283   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.480737   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.480756   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.481068   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.481264   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:37.482921   35332 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:37.482935   35332 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:37.483235   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.483278   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.499083   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0917 17:24:37.499490   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.499954   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.499976   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.500271   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.500460   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:37.503068   35332 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:37.503548   35332 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:37.503580   35332 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:37.503778   35332 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:37.504085   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:37.504130   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:37.519510   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0917 17:24:37.519971   35332 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:37.520501   35332 main.go:141] libmachine: Using API Version  1
	I0917 17:24:37.520524   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:37.520841   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:37.521028   35332 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:37.521212   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:37.521273   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:37.524221   35332 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:37.524649   35332 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:37.524669   35332 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:37.524923   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:37.525121   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:37.525308   35332 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:37.525453   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:37.613403   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:37.628604   35332 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
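In the run above, the status check for ha-181247-m02 fails at the SSH stage with "dial tcp 192.168.39.11:22: connect: no route to host", which is why the node is reported as Host:Error with Kubelet and APIServer Nonexistent. As a minimal way to reproduce that reachability failure by hand, the hedged Go sketch below dials the same address with a timeout; the address and timeout are taken from or assumed for this report and nothing here is minikube code.

	// Hypothetical reachability check mirroring the failed SSH dial above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.11:22", 5*time.Second)
		if err != nil {
			// While the secondary VM is going down, the dial fails with
			// "connect: no route to host"; the status command maps this
			// to Host:Error / Kubelet:Nonexistent for the node.
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable")
	}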
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 7 (660.259222ms)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Stopping
	kubelet: Stopping
	apiserver: Stopping
	kubeconfig: Stopping
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:24:43.160661   35449 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:43.160938   35449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:43.160949   35449 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:43.160955   35449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:43.161144   35449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:43.161399   35449 out.go:352] Setting JSON to false
	I0917 17:24:43.161433   35449 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:43.161550   35449 notify.go:220] Checking for updates...
	I0917 17:24:43.161903   35449 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:43.161920   35449 status.go:255] checking status of ha-181247 ...
	I0917 17:24:43.162376   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.162448   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.179611   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0917 17:24:43.180032   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.180557   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.180579   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.180987   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.181204   35449 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:43.182765   35449 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:43.182784   35449 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:43.183167   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.183204   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.198983   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
	I0917 17:24:43.199520   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.199992   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.200014   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.200302   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.200534   35449 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:43.203389   35449 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:43.203870   35449 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:43.203905   35449 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:43.204054   35449 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:43.204392   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.204444   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.220024   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0917 17:24:43.220454   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.221026   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.221046   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.221430   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.221658   35449 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:43.221822   35449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:43.221843   35449 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:43.224344   35449 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:43.224791   35449 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:43.224824   35449 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:43.224921   35449 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:43.225100   35449 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:43.225252   35449 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:43.225390   35449 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:43.311699   35449 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:43.318344   35449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:43.334139   35449 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:43.334178   35449 api_server.go:166] Checking apiserver status ...
	I0917 17:24:43.334212   35449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:43.351359   35449 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:43.367798   35449 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:43.367847   35449 ssh_runner.go:195] Run: ls
	I0917 17:24:43.372951   35449 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:43.378470   35449 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:43.378497   35449 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:43.378506   35449 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:43.378528   35449 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:43.378908   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.378953   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.395172   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0917 17:24:43.395666   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.396129   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.396149   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.396452   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.396646   35449 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:43.398266   35449 status.go:330] ha-181247-m02 host status = "Stopping" (err=<nil>)
	I0917 17:24:43.398288   35449 status.go:343] host is not running, skipping remaining checks
	I0917 17:24:43.398296   35449 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Stopping Kubelet:Stopping APIServer:Stopping Kubeconfig:Stopping Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:43.398316   35449 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:43.398624   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.398669   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.414453   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0917 17:24:43.414988   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.415558   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.415594   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.415945   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.416117   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:43.417714   35449 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:43.417729   35449 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:43.418074   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.418112   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.435148   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0917 17:24:43.435609   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.436131   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.436152   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.436482   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.436710   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:43.439895   35449 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:43.440360   35449 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:43.440380   35449 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:43.440650   35449 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:43.440954   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.441003   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.459092   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0917 17:24:43.459590   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.460115   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.460136   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.460465   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.460634   35449 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:43.460790   35449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:43.460813   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:43.463968   35449 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:43.464338   35449 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:43.464370   35449 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:43.464509   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:43.464688   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:43.464820   35449 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:43.464938   35449 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:43.552066   35449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:43.575685   35449 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:43.575712   35449 api_server.go:166] Checking apiserver status ...
	I0917 17:24:43.575744   35449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:43.592035   35449 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:43.604470   35449 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:43.604553   35449 ssh_runner.go:195] Run: ls
	I0917 17:24:43.609218   35449 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:43.614233   35449 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:43.614258   35449 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:43.614267   35449 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:43.614281   35449 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:43.614659   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.614695   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.630213   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0917 17:24:43.630660   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.631216   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.631244   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.631581   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.631755   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:43.633512   35449 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:43.633526   35449 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:43.633804   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.633836   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.649341   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0917 17:24:43.649807   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.650338   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.650383   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.650722   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.650893   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:43.653844   35449 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:43.654242   35449 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:43.654278   35449 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:43.654400   35449 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:43.654699   35449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:43.654741   35449 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:43.671633   35449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0917 17:24:43.672086   35449 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:43.672619   35449 main.go:141] libmachine: Using API Version  1
	I0917 17:24:43.672647   35449 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:43.672954   35449 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:43.673147   35449 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:43.673356   35449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:43.673377   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:43.676375   35449 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:43.676852   35449 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:43.676894   35449 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:43.677038   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:43.677187   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:43.677371   35449 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:43.677537   35449 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:43.760975   35449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:43.777477   35449 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
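Here the libvirt domain state for ha-181247-m02 is reported as "Stopping", so the remaining SSH checks are skipped; the next run below finally observes "Stopped". As a simplified, hypothetical picture of the retry pattern the harness appears to follow at ha_test.go:428, the Go sketch below re-runs the status command until the output reports a stopped host; the binary path and profile name are copied from the log, the parsing is deliberately naive, and minikube status exits nonzero while any node is down, so the command error is ignored and only the output is inspected.

	// Hypothetical polling loop: wait for the secondary node to settle
	// from Stopping to Stopped, as observed across the runs in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-181247",
				"status", "-v=7", "--alsologtostderr").CombinedOutput()
			if strings.Contains(string(out), "host: Stopped") {
				fmt.Println("secondary node reports Stopped")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("node never reached Stopped")
	}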
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 7 (650.216021ms)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:24:50.124969   35557 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:24:50.125223   35557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:50.125246   35557 out.go:358] Setting ErrFile to fd 2...
	I0917 17:24:50.125253   35557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:24:50.125482   35557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:24:50.125648   35557 out.go:352] Setting JSON to false
	I0917 17:24:50.125680   35557 mustload.go:65] Loading cluster: ha-181247
	I0917 17:24:50.125786   35557 notify.go:220] Checking for updates...
	I0917 17:24:50.126040   35557 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:24:50.126055   35557 status.go:255] checking status of ha-181247 ...
	I0917 17:24:50.126498   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.126548   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.144624   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0917 17:24:50.145179   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.145800   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.145826   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.146175   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.146392   35557 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:24:50.148198   35557 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:24:50.148219   35557 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:50.148818   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.148880   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.164213   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0917 17:24:50.164668   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.165146   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.165175   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.165505   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.165684   35557 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:24:50.168575   35557 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:50.169094   35557 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:50.169120   35557 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:50.169302   35557 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:24:50.169595   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.169633   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.185049   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0917 17:24:50.185564   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.186133   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.186155   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.186509   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.186688   35557 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:24:50.186875   35557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:50.186899   35557 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:24:50.190181   35557 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:50.190677   35557 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:24:50.190709   35557 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:24:50.190808   35557 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:24:50.190980   35557 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:24:50.191120   35557 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:24:50.191252   35557 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:24:50.277000   35557 ssh_runner.go:195] Run: systemctl --version
	I0917 17:24:50.284273   35557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:50.300531   35557 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:50.300576   35557 api_server.go:166] Checking apiserver status ...
	I0917 17:24:50.300618   35557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:50.316102   35557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:24:50.327062   35557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:50.327114   35557 ssh_runner.go:195] Run: ls
	I0917 17:24:50.332476   35557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:50.339079   35557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:50.339105   35557 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:24:50.339114   35557 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:50.339138   35557 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:24:50.339452   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.339491   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.355280   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0917 17:24:50.355760   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.356279   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.356300   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.356697   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.356879   35557 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:24:50.358742   35557 status.go:330] ha-181247-m02 host status = "Stopped" (err=<nil>)
	I0917 17:24:50.358771   35557 status.go:343] host is not running, skipping remaining checks
	I0917 17:24:50.358778   35557 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:50.358811   35557 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:24:50.359156   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.359194   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.375084   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0917 17:24:50.375585   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.376039   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.376062   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.376433   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.376632   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:24:50.378284   35557 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:24:50.378304   35557 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:50.378740   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.378808   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.395371   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0917 17:24:50.395891   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.396409   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.396436   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.396781   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.396994   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:24:50.400068   35557 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:50.400511   35557 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:50.400538   35557 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:50.400693   35557 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:24:50.400992   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.401039   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.417636   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0917 17:24:50.418127   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.418650   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.418677   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.418988   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.419171   35557 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:24:50.419363   35557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:50.419389   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:24:50.422272   35557 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:50.422752   35557 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:24:50.422780   35557 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:24:50.422903   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:24:50.423047   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:24:50.423185   35557 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:24:50.423300   35557 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:24:50.507034   35557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:50.528371   35557 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:24:50.528398   35557 api_server.go:166] Checking apiserver status ...
	I0917 17:24:50.528431   35557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:24:50.544031   35557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:24:50.559830   35557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:24:50.559892   35557 ssh_runner.go:195] Run: ls
	I0917 17:24:50.565017   35557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:24:50.569376   35557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:24:50.569412   35557 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:24:50.569424   35557 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:24:50.569461   35557 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:24:50.569810   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.569854   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.585958   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0917 17:24:50.586401   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.586921   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.586946   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.587239   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.587431   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:24:50.589052   35557 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:24:50.589077   35557 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:50.589397   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.589451   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.604624   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0917 17:24:50.605046   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.605627   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.605680   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.605980   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.606186   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:24:50.608674   35557 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:50.609118   35557 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:50.609146   35557 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:50.609292   35557 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:24:50.609654   35557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:24:50.609694   35557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:24:50.624818   35557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0917 17:24:50.625185   35557 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:24:50.625673   35557 main.go:141] libmachine: Using API Version  1
	I0917 17:24:50.625695   35557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:24:50.626012   35557 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:24:50.626173   35557 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:24:50.626338   35557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:24:50.626361   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:24:50.629020   35557 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:50.629513   35557 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:24:50.629545   35557 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:24:50.629648   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:24:50.629802   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:24:50.629913   35557 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:24:50.630020   35557 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:24:50.714965   35557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:24:50.732386   35557 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
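For context, the stderr above shows how minikube status decides whether an apiserver is healthy on each control-plane node: it looks for the kube-apiserver process over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*), tries to read that process's freezer cgroup, and then probes the HA virtual IP at https://192.168.39.254:8443/healthz, treating a 200 response with body "ok" as Running. The Go sketch below reproduces only that final healthz probe as an illustration based on the log; it is not minikube's actual status code, and the InsecureSkipVerify client is a shortcut to keep the example self-contained rather than how minikube handles the cluster certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes a kube-apiserver /healthz endpoint and reports whether
// it answered 200 with the body "ok", matching the check seen in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal certificate; skipping
			// verification keeps this sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
	}
}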
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 7 (650.637319ms)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:25:01.611773   35661 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:25:01.611905   35661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:25:01.611914   35661 out.go:358] Setting ErrFile to fd 2...
	I0917 17:25:01.611918   35661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:25:01.612095   35661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:25:01.612257   35661 out.go:352] Setting JSON to false
	I0917 17:25:01.612283   35661 mustload.go:65] Loading cluster: ha-181247
	I0917 17:25:01.612340   35661 notify.go:220] Checking for updates...
	I0917 17:25:01.612698   35661 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:25:01.612713   35661 status.go:255] checking status of ha-181247 ...
	I0917 17:25:01.613091   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.613143   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.632852   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0917 17:25:01.633329   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.634043   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.634075   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.634384   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.634562   35661 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:25:01.636217   35661 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:25:01.636234   35661 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:25:01.636543   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.636575   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.652738   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0917 17:25:01.653307   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.653881   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.653903   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.654181   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.654353   35661 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:25:01.657320   35661 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:25:01.657817   35661 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:25:01.657855   35661 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:25:01.658073   35661 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:25:01.658530   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.658577   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.674370   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0917 17:25:01.674888   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.675386   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.675409   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.675752   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.675940   35661 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:25:01.676121   35661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:25:01.676145   35661 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:25:01.679056   35661 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:25:01.679447   35661 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:25:01.679472   35661 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:25:01.679631   35661 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:25:01.679818   35661 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:25:01.679941   35661 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:25:01.680059   35661 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:25:01.769185   35661 ssh_runner.go:195] Run: systemctl --version
	I0917 17:25:01.776376   35661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:25:01.795423   35661 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:25:01.795459   35661 api_server.go:166] Checking apiserver status ...
	I0917 17:25:01.795492   35661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:25:01.811125   35661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup
	W0917 17:25:01.821722   35661 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:25:01.821773   35661 ssh_runner.go:195] Run: ls
	I0917 17:25:01.826619   35661 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:25:01.831259   35661 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:25:01.831284   35661 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:25:01.831293   35661 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:25:01.831318   35661 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:25:01.831620   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.831656   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.847478   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0917 17:25:01.847905   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.848386   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.848402   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.848754   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.848965   35661 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:25:01.850758   35661 status.go:330] ha-181247-m02 host status = "Stopped" (err=<nil>)
	I0917 17:25:01.850774   35661 status.go:343] host is not running, skipping remaining checks
	I0917 17:25:01.850781   35661 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:25:01.850802   35661 status.go:255] checking status of ha-181247-m03 ...
	I0917 17:25:01.851124   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.851183   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.866790   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0917 17:25:01.867267   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.867845   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.867882   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.868239   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.868423   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:25:01.870023   35661 status.go:330] ha-181247-m03 host status = "Running" (err=<nil>)
	I0917 17:25:01.870041   35661 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:25:01.870341   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.870377   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.885774   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0917 17:25:01.886220   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.886685   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.886710   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.887114   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.887319   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:25:01.890666   35661 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:01.891068   35661 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:25:01.891085   35661 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:01.891233   35661 host.go:66] Checking if "ha-181247-m03" exists ...
	I0917 17:25:01.891583   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:01.891643   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:01.907059   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0917 17:25:01.907494   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:01.908015   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:01.908046   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:01.908370   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:01.908582   35661 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:25:01.908753   35661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:25:01.908773   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:25:01.911724   35661 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:01.912113   35661 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:25:01.912146   35661 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:01.912297   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:25:01.912461   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:25:01.912602   35661 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:25:01.912848   35661 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:25:01.997506   35661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:25:02.015306   35661 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:25:02.015338   35661 api_server.go:166] Checking apiserver status ...
	I0917 17:25:02.015388   35661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:25:02.030844   35661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0917 17:25:02.041769   35661 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:25:02.041844   35661 ssh_runner.go:195] Run: ls
	I0917 17:25:02.046961   35661 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:25:02.053262   35661 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:25:02.053295   35661 status.go:422] ha-181247-m03 apiserver status = Running (err=<nil>)
	I0917 17:25:02.053306   35661 status.go:257] ha-181247-m03 status: &{Name:ha-181247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:25:02.053349   35661 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:25:02.053757   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:02.053801   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:02.069407   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0917 17:25:02.069895   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:02.070628   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:02.070652   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:02.070940   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:02.071124   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:25:02.072959   35661 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:25:02.072975   35661 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:25:02.073275   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:02.073316   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:02.089469   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0917 17:25:02.090004   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:02.090528   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:02.090554   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:02.090936   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:02.091225   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:25:02.094045   35661 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:02.094438   35661 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:25:02.094466   35661 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:02.094623   35661 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:25:02.094958   35661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:02.095006   35661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:02.110274   35661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34081
	I0917 17:25:02.110763   35661 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:02.111194   35661 main.go:141] libmachine: Using API Version  1
	I0917 17:25:02.111218   35661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:02.111513   35661 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:02.111717   35661 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:25:02.111875   35661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:25:02.111894   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:25:02.114868   35661 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:02.115317   35661 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:25:02.115350   35661 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:02.115504   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:25:02.115674   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:25:02.115804   35661 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:25:02.115940   35661 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:25:02.200951   35661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:25:02.217857   35661 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr" : exit status 7
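The failure above is mechanical: after "node start m02" the test re-runs "out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr" and expects a zero exit code, but minikube still reports ha-181247-m02 as Stopped and exits with status 7. A rough sketch of performing that kind of check outside the test harness follows; the helper name runStatus is hypothetical and this is not the actual ha_test.go code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runStatus runs `minikube status` for a profile and returns the exit code;
// minikube exits non-zero (7 in the log above) when any node is not running.
func runStatus(profile string) (int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	if err != nil {
		return -1, err
	}
	return 0, nil
}

func main() {
	code, err := runStatus("ha-181247")
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	if code != 0 {
		fmt.Printf("status exited with %d: at least one node is not healthy\n", code)
	}
}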
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-181247 -n ha-181247
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-181247 logs -n 25: (1.490924411s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m03_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m04 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp testdata/cp-test.txt                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m04_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03:/home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m03 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-181247 node stop m02 -v=7                                                     | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-181247 node start m02 -v=7                                                    | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:17:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:17:12.295260   29734 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:17:12.295383   29734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:17:12.295392   29734 out.go:358] Setting ErrFile to fd 2...
	I0917 17:17:12.295396   29734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:17:12.295568   29734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:17:12.296178   29734 out.go:352] Setting JSON to false
	I0917 17:17:12.297084   29734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3547,"bootTime":1726589885,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:17:12.297187   29734 start.go:139] virtualization: kvm guest
	I0917 17:17:12.299632   29734 out.go:177] * [ha-181247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:17:12.301202   29734 notify.go:220] Checking for updates...
	I0917 17:17:12.301208   29734 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:17:12.302756   29734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:17:12.304156   29734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:12.305489   29734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:12.306572   29734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:17:12.307710   29734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:17:12.309117   29734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:17:12.344556   29734 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 17:17:12.345884   29734 start.go:297] selected driver: kvm2
	I0917 17:17:12.345897   29734 start.go:901] validating driver "kvm2" against <nil>
	I0917 17:17:12.345915   29734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:17:12.346647   29734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:17:12.346716   29734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:17:12.362456   29734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:17:12.362516   29734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 17:17:12.362773   29734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:17:12.362807   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:12.362842   29734 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 17:17:12.362850   29734 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 17:17:12.362901   29734 start.go:340] cluster config:
	{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:17:12.362994   29734 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:17:12.365161   29734 out.go:177] * Starting "ha-181247" primary control-plane node in "ha-181247" cluster
	I0917 17:17:12.366603   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:12.366647   29734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:17:12.366658   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:17:12.366754   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:17:12.366765   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:17:12.367061   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:12.367079   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json: {Name:mk21af64916b6c67dc99ac97417f17a21d879838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:12.367216   29734 start.go:360] acquireMachinesLock for ha-181247: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:17:12.367244   29734 start.go:364] duration metric: took 15.704µs to acquireMachinesLock for "ha-181247"
	I0917 17:17:12.367260   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:12.367314   29734 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 17:17:12.369104   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:17:12.369279   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:12.369322   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:12.384105   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0917 17:17:12.384598   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:12.385148   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:12.385167   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:12.385543   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:12.385711   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:12.385846   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:12.385978   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:17:12.386003   29734 client.go:168] LocalClient.Create starting
	I0917 17:17:12.386030   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:17:12.386066   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:12.386080   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:12.386133   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:17:12.386155   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:12.386170   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:12.386187   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:17:12.386195   29734 main.go:141] libmachine: (ha-181247) Calling .PreCreateCheck
	I0917 17:17:12.386517   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:12.386899   29734 main.go:141] libmachine: Creating machine...
	I0917 17:17:12.386911   29734 main.go:141] libmachine: (ha-181247) Calling .Create
	I0917 17:17:12.387046   29734 main.go:141] libmachine: (ha-181247) Creating KVM machine...
	I0917 17:17:12.388285   29734 main.go:141] libmachine: (ha-181247) DBG | found existing default KVM network
	I0917 17:17:12.388993   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.388835   29757 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151e0}
	I0917 17:17:12.389058   29734 main.go:141] libmachine: (ha-181247) DBG | created network xml: 
	I0917 17:17:12.389082   29734 main.go:141] libmachine: (ha-181247) DBG | <network>
	I0917 17:17:12.389090   29734 main.go:141] libmachine: (ha-181247) DBG |   <name>mk-ha-181247</name>
	I0917 17:17:12.389097   29734 main.go:141] libmachine: (ha-181247) DBG |   <dns enable='no'/>
	I0917 17:17:12.389120   29734 main.go:141] libmachine: (ha-181247) DBG |   
	I0917 17:17:12.389132   29734 main.go:141] libmachine: (ha-181247) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0917 17:17:12.389140   29734 main.go:141] libmachine: (ha-181247) DBG |     <dhcp>
	I0917 17:17:12.389148   29734 main.go:141] libmachine: (ha-181247) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0917 17:17:12.389171   29734 main.go:141] libmachine: (ha-181247) DBG |     </dhcp>
	I0917 17:17:12.389194   29734 main.go:141] libmachine: (ha-181247) DBG |   </ip>
	I0917 17:17:12.389205   29734 main.go:141] libmachine: (ha-181247) DBG |   
	I0917 17:17:12.389211   29734 main.go:141] libmachine: (ha-181247) DBG | </network>
	I0917 17:17:12.389222   29734 main.go:141] libmachine: (ha-181247) DBG | 
	I0917 17:17:12.394697   29734 main.go:141] libmachine: (ha-181247) DBG | trying to create private KVM network mk-ha-181247 192.168.39.0/24...
	I0917 17:17:12.464235   29734 main.go:141] libmachine: (ha-181247) DBG | private KVM network mk-ha-181247 192.168.39.0/24 created
	I0917 17:17:12.464265   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.464199   29757 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:12.464275   29734 main.go:141] libmachine: (ha-181247) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 ...
	I0917 17:17:12.464289   29734 main.go:141] libmachine: (ha-181247) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:17:12.464466   29734 main.go:141] libmachine: (ha-181247) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:17:12.728745   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:12.728594   29757 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa...
	I0917 17:17:13.051914   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:13.051793   29757 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/ha-181247.rawdisk...
	I0917 17:17:13.051946   29734 main.go:141] libmachine: (ha-181247) DBG | Writing magic tar header
	I0917 17:17:13.051955   29734 main.go:141] libmachine: (ha-181247) DBG | Writing SSH key tar header
	I0917 17:17:13.051962   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:13.051909   29757 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 ...
	I0917 17:17:13.052022   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247
	I0917 17:17:13.052114   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247 (perms=drwx------)
	I0917 17:17:13.052148   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:17:13.052165   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:17:13.052178   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:17:13.052190   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:13.052207   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:17:13.052215   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:17:13.052222   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:17:13.052228   29734 main.go:141] libmachine: (ha-181247) DBG | Checking permissions on dir: /home
	I0917 17:17:13.052235   29734 main.go:141] libmachine: (ha-181247) DBG | Skipping /home - not owner
	I0917 17:17:13.052245   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:17:13.052253   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:17:13.052260   29734 main.go:141] libmachine: (ha-181247) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:17:13.052266   29734 main.go:141] libmachine: (ha-181247) Creating domain...
	I0917 17:17:13.053913   29734 main.go:141] libmachine: (ha-181247) define libvirt domain using xml: 
	I0917 17:17:13.053929   29734 main.go:141] libmachine: (ha-181247) <domain type='kvm'>
	I0917 17:17:13.053935   29734 main.go:141] libmachine: (ha-181247)   <name>ha-181247</name>
	I0917 17:17:13.053940   29734 main.go:141] libmachine: (ha-181247)   <memory unit='MiB'>2200</memory>
	I0917 17:17:13.053945   29734 main.go:141] libmachine: (ha-181247)   <vcpu>2</vcpu>
	I0917 17:17:13.053948   29734 main.go:141] libmachine: (ha-181247)   <features>
	I0917 17:17:13.053953   29734 main.go:141] libmachine: (ha-181247)     <acpi/>
	I0917 17:17:13.053957   29734 main.go:141] libmachine: (ha-181247)     <apic/>
	I0917 17:17:13.053961   29734 main.go:141] libmachine: (ha-181247)     <pae/>
	I0917 17:17:13.053966   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.053970   29734 main.go:141] libmachine: (ha-181247)   </features>
	I0917 17:17:13.053975   29734 main.go:141] libmachine: (ha-181247)   <cpu mode='host-passthrough'>
	I0917 17:17:13.053980   29734 main.go:141] libmachine: (ha-181247)   
	I0917 17:17:13.053984   29734 main.go:141] libmachine: (ha-181247)   </cpu>
	I0917 17:17:13.053988   29734 main.go:141] libmachine: (ha-181247)   <os>
	I0917 17:17:13.053993   29734 main.go:141] libmachine: (ha-181247)     <type>hvm</type>
	I0917 17:17:13.053998   29734 main.go:141] libmachine: (ha-181247)     <boot dev='cdrom'/>
	I0917 17:17:13.054004   29734 main.go:141] libmachine: (ha-181247)     <boot dev='hd'/>
	I0917 17:17:13.054009   29734 main.go:141] libmachine: (ha-181247)     <bootmenu enable='no'/>
	I0917 17:17:13.054015   29734 main.go:141] libmachine: (ha-181247)   </os>
	I0917 17:17:13.054044   29734 main.go:141] libmachine: (ha-181247)   <devices>
	I0917 17:17:13.054063   29734 main.go:141] libmachine: (ha-181247)     <disk type='file' device='cdrom'>
	I0917 17:17:13.054090   29734 main.go:141] libmachine: (ha-181247)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/boot2docker.iso'/>
	I0917 17:17:13.054111   29734 main.go:141] libmachine: (ha-181247)       <target dev='hdc' bus='scsi'/>
	I0917 17:17:13.054122   29734 main.go:141] libmachine: (ha-181247)       <readonly/>
	I0917 17:17:13.054136   29734 main.go:141] libmachine: (ha-181247)     </disk>
	I0917 17:17:13.054148   29734 main.go:141] libmachine: (ha-181247)     <disk type='file' device='disk'>
	I0917 17:17:13.054159   29734 main.go:141] libmachine: (ha-181247)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:17:13.054174   29734 main.go:141] libmachine: (ha-181247)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/ha-181247.rawdisk'/>
	I0917 17:17:13.054184   29734 main.go:141] libmachine: (ha-181247)       <target dev='hda' bus='virtio'/>
	I0917 17:17:13.054192   29734 main.go:141] libmachine: (ha-181247)     </disk>
	I0917 17:17:13.054200   29734 main.go:141] libmachine: (ha-181247)     <interface type='network'>
	I0917 17:17:13.054214   29734 main.go:141] libmachine: (ha-181247)       <source network='mk-ha-181247'/>
	I0917 17:17:13.054222   29734 main.go:141] libmachine: (ha-181247)       <model type='virtio'/>
	I0917 17:17:13.054240   29734 main.go:141] libmachine: (ha-181247)     </interface>
	I0917 17:17:13.054247   29734 main.go:141] libmachine: (ha-181247)     <interface type='network'>
	I0917 17:17:13.054252   29734 main.go:141] libmachine: (ha-181247)       <source network='default'/>
	I0917 17:17:13.054256   29734 main.go:141] libmachine: (ha-181247)       <model type='virtio'/>
	I0917 17:17:13.054260   29734 main.go:141] libmachine: (ha-181247)     </interface>
	I0917 17:17:13.054264   29734 main.go:141] libmachine: (ha-181247)     <serial type='pty'>
	I0917 17:17:13.054269   29734 main.go:141] libmachine: (ha-181247)       <target port='0'/>
	I0917 17:17:13.054272   29734 main.go:141] libmachine: (ha-181247)     </serial>
	I0917 17:17:13.054276   29734 main.go:141] libmachine: (ha-181247)     <console type='pty'>
	I0917 17:17:13.054280   29734 main.go:141] libmachine: (ha-181247)       <target type='serial' port='0'/>
	I0917 17:17:13.054287   29734 main.go:141] libmachine: (ha-181247)     </console>
	I0917 17:17:13.054296   29734 main.go:141] libmachine: (ha-181247)     <rng model='virtio'>
	I0917 17:17:13.054301   29734 main.go:141] libmachine: (ha-181247)       <backend model='random'>/dev/random</backend>
	I0917 17:17:13.054307   29734 main.go:141] libmachine: (ha-181247)     </rng>
	I0917 17:17:13.054312   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.054317   29734 main.go:141] libmachine: (ha-181247)     
	I0917 17:17:13.054322   29734 main.go:141] libmachine: (ha-181247)   </devices>
	I0917 17:17:13.054325   29734 main.go:141] libmachine: (ha-181247) </domain>
	I0917 17:17:13.054331   29734 main.go:141] libmachine: (ha-181247) 
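	The XML logged above is the domain definition the KVM driver hands to libvirt before booting the machine. As a minimal sketch of that step (not the driver's actual code; it assumes the libvirt.org/go/libvirt bindings, a qemu:///system connection, and the libvirt development libraries for cgo, with the XML string elided):

```go
// Sketch: define a persistent KVM domain from XML and boot it via libvirt.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Persistently define the domain from the XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create()
}

func main() {
	const domainXML = `<domain type='kvm'>...</domain>` // elided; see the XML in the log above
	if err := defineAndStart(domainXML); err != nil {
		log.Fatal(err)
	}
}
```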
	I0917 17:17:13.058801   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:e9:c7:ce in network default
	I0917 17:17:13.059353   29734 main.go:141] libmachine: (ha-181247) Ensuring networks are active...
	I0917 17:17:13.059369   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:13.060130   29734 main.go:141] libmachine: (ha-181247) Ensuring network default is active
	I0917 17:17:13.060461   29734 main.go:141] libmachine: (ha-181247) Ensuring network mk-ha-181247 is active
	I0917 17:17:13.060945   29734 main.go:141] libmachine: (ha-181247) Getting domain xml...
	I0917 17:17:13.061685   29734 main.go:141] libmachine: (ha-181247) Creating domain...
	I0917 17:17:14.270331   29734 main.go:141] libmachine: (ha-181247) Waiting to get IP...
	I0917 17:17:14.271018   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.271449   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.271500   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.271435   29757 retry.go:31] will retry after 207.881018ms: waiting for machine to come up
	I0917 17:17:14.480839   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.481383   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.481413   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.481338   29757 retry.go:31] will retry after 323.692976ms: waiting for machine to come up
	I0917 17:17:14.806856   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:14.807287   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:14.807309   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:14.807243   29757 retry.go:31] will retry after 339.921351ms: waiting for machine to come up
	I0917 17:17:15.148971   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:15.149412   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:15.149439   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:15.149393   29757 retry.go:31] will retry after 383.286106ms: waiting for machine to come up
	I0917 17:17:15.534034   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:15.534603   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:15.534629   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:15.534563   29757 retry.go:31] will retry after 575.428604ms: waiting for machine to come up
	I0917 17:17:16.111428   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:16.111851   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:16.111891   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:16.111782   29757 retry.go:31] will retry after 923.833339ms: waiting for machine to come up
	I0917 17:17:17.036886   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:17.037288   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:17.037324   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:17.037247   29757 retry.go:31] will retry after 853.549592ms: waiting for machine to come up
	I0917 17:17:17.892848   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:17.893205   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:17.893242   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:17.893158   29757 retry.go:31] will retry after 1.313972164s: waiting for machine to come up
	I0917 17:17:19.208284   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:19.208773   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:19.208798   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:19.208735   29757 retry.go:31] will retry after 1.71538151s: waiting for machine to come up
	I0917 17:17:20.926651   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:20.927074   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:20.927103   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:20.927027   29757 retry.go:31] will retry after 2.217693124s: waiting for machine to come up
	I0917 17:17:23.146319   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:23.146752   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:23.146783   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:23.146687   29757 retry.go:31] will retry after 1.923987178s: waiting for machine to come up
	I0917 17:17:25.072729   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:25.073147   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:25.073189   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:25.073114   29757 retry.go:31] will retry after 3.588058762s: waiting for machine to come up
	I0917 17:17:28.662628   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:28.663074   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find current IP address of domain ha-181247 in network mk-ha-181247
	I0917 17:17:28.663093   29734 main.go:141] libmachine: (ha-181247) DBG | I0917 17:17:28.663020   29757 retry.go:31] will retry after 4.377762468s: waiting for machine to come up
	I0917 17:17:33.042665   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.043121   29734 main.go:141] libmachine: (ha-181247) Found IP for machine: 192.168.39.195
	I0917 17:17:33.043147   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has current primary IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.043155   29734 main.go:141] libmachine: (ha-181247) Reserving static IP address...
	I0917 17:17:33.043511   29734 main.go:141] libmachine: (ha-181247) DBG | unable to find host DHCP lease matching {name: "ha-181247", mac: "52:54:00:51:1e:14", ip: "192.168.39.195"} in network mk-ha-181247
	I0917 17:17:33.119976   29734 main.go:141] libmachine: (ha-181247) Reserved static IP address: 192.168.39.195
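	The "will retry after ..." lines above come from a polling loop with a growing wait interval while the freshly created domain boots and requests a DHCP lease. A minimal sketch of that pattern (not minikube's retry.go; lookupIP is a hypothetical placeholder for the libvirt DHCP-lease query):

```go
// Sketch: retry with a growing backoff until the VM reports an IP address.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases
// for the domain's MAC address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no DHCP lease yet")
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		// Grow the wait roughly like the intervals in the log (0.2s, 0.3s, ... 4.4s).
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %q", domain)
}

func main() {
	if ip, err := waitForIP("ha-181247", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```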
	I0917 17:17:33.120002   29734 main.go:141] libmachine: (ha-181247) Waiting for SSH to be available...
	I0917 17:17:33.120013   29734 main.go:141] libmachine: (ha-181247) DBG | Getting to WaitForSSH function...
	I0917 17:17:33.122580   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.122982   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.123015   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.123194   29734 main.go:141] libmachine: (ha-181247) DBG | Using SSH client type: external
	I0917 17:17:33.123219   29734 main.go:141] libmachine: (ha-181247) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa (-rw-------)
	I0917 17:17:33.123283   29734 main.go:141] libmachine: (ha-181247) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:17:33.123304   29734 main.go:141] libmachine: (ha-181247) DBG | About to run SSH command:
	I0917 17:17:33.123322   29734 main.go:141] libmachine: (ha-181247) DBG | exit 0
	I0917 17:17:33.253772   29734 main.go:141] libmachine: (ha-181247) DBG | SSH cmd err, output: <nil>: 
	I0917 17:17:33.254049   29734 main.go:141] libmachine: (ha-181247) KVM machine creation complete!
	I0917 17:17:33.254406   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:33.254993   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:33.255173   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:33.255336   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:17:33.255371   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:33.256649   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:17:33.256662   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:17:33.256670   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:17:33.256677   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.258972   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.259370   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.259394   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.259523   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.259702   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.259827   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.259954   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.260147   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.260340   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.260352   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:17:33.368747   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
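	Both the external `ssh` invocation and the native client above simply run `exit 0` on the guest; a nil error is what the driver treats as "SSH is available". A hedged sketch of the same check using golang.org/x/crypto/ssh (the address, user, and key path mirror values from the log but are assumptions here, not the driver's implementation):

```go
// Sketch: verify SSH reachability by running `exit 0` on the guest.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func checkSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	return session.Run("exit 0") // nil error means the guest shell answered
}

func main() {
	err := checkSSH("192.168.39.195:22", "docker",
		"/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa")
	if err != nil {
		log.Fatalf("SSH not available yet: %v", err)
	}
	log.Println("SSH is available")
}
```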
	I0917 17:17:33.368769   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:17:33.368777   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.371379   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.371741   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.371768   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.371890   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.372061   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.372235   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.372326   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.372476   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.372646   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.372660   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:17:33.486345   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:17:33.486422   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:17:33.486430   29734 main.go:141] libmachine: Provisioning with buildroot...
	I0917 17:17:33.486437   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.486682   29734 buildroot.go:166] provisioning hostname "ha-181247"
	I0917 17:17:33.486709   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.486904   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.489683   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.490031   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.490057   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.490210   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.490396   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.490505   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.490639   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.490837   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.491006   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.491017   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247 && echo "ha-181247" | sudo tee /etc/hostname
	I0917 17:17:33.617089   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:17:33.617115   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.619660   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.619945   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.619972   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.620114   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.620302   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.620453   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.620612   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.620771   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.620926   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.620941   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:17:33.738853   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:17:33.738881   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:17:33.738936   29734 buildroot.go:174] setting up certificates
	I0917 17:17:33.738950   29734 provision.go:84] configureAuth start
	I0917 17:17:33.738967   29734 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:17:33.739211   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:33.741845   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.742160   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.742179   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.742325   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.744318   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.744701   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.744727   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.744831   29734 provision.go:143] copyHostCerts
	I0917 17:17:33.744878   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:17:33.744930   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:17:33.744945   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:17:33.745036   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:17:33.745171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:17:33.745202   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:17:33.745212   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:17:33.745274   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:17:33.745363   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:17:33.745487   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:17:33.745507   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:17:33.745580   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:17:33.745692   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247 san=[127.0.0.1 192.168.39.195 ha-181247 localhost minikube]
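	The server certificate above is issued against the existing minikube CA with the SAN list shown (127.0.0.1, 192.168.39.195, ha-181247, localhost, minikube). A compact sketch of issuing such a certificate with crypto/x509 (not the provisioner's code; minikube loads its CA from the certs/ directory, whereas this example generates a throwaway CA in memory purely to stay self-contained):

```go
// Sketch: issue a CA-signed server certificate carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would load ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-181247"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-181247", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.195")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```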
	I0917 17:17:33.826857   29734 provision.go:177] copyRemoteCerts
	I0917 17:17:33.826917   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:17:33.826943   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.829527   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.829844   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.829887   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.830118   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.830303   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.830463   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.830573   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:33.915861   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:17:33.915948   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:17:33.941842   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:17:33.941920   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 17:17:33.967877   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:17:33.967945   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:17:33.993729   29734 provision.go:87] duration metric: took 254.764989ms to configureAuth
	I0917 17:17:33.993752   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:17:33.993914   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:33.994039   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:33.996709   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.997053   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:33.997081   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:33.997264   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:33.997459   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.997601   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:33.997716   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:33.997851   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:33.998110   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:33.998128   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:17:34.246446   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:17:34.246473   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:17:34.246493   29734 main.go:141] libmachine: (ha-181247) Calling .GetURL
	I0917 17:17:34.247798   29734 main.go:141] libmachine: (ha-181247) DBG | Using libvirt version 6000000
	I0917 17:17:34.250061   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.250427   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.250452   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.250641   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:17:34.250653   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:17:34.250659   29734 client.go:171] duration metric: took 21.864649423s to LocalClient.Create
	I0917 17:17:34.250680   29734 start.go:167] duration metric: took 21.864702696s to libmachine.API.Create "ha-181247"
	I0917 17:17:34.250689   29734 start.go:293] postStartSetup for "ha-181247" (driver="kvm2")
	I0917 17:17:34.250697   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:17:34.250712   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.250982   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:17:34.251008   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.253068   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.253358   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.253395   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.253512   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.253685   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.253843   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.254020   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.339891   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:17:34.344617   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:17:34.344645   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:17:34.344722   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:17:34.344816   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:17:34.344827   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:17:34.344956   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:17:34.355317   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:17:34.382262   29734 start.go:296] duration metric: took 131.561481ms for postStartSetup
	I0917 17:17:34.382311   29734 main.go:141] libmachine: (ha-181247) Calling .GetConfigRaw
	I0917 17:17:34.382983   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:34.385552   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.385902   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.385928   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.386184   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:34.386420   29734 start.go:128] duration metric: took 22.019096291s to createHost
	I0917 17:17:34.386441   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.388754   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.389042   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.389073   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.389195   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.389386   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.389604   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.389763   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.389934   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:17:34.390094   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:17:34.390103   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:17:34.502406   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593454.466259739
	
	I0917 17:17:34.502432   29734 fix.go:216] guest clock: 1726593454.466259739
	I0917 17:17:34.502438   29734 fix.go:229] Guest: 2024-09-17 17:17:34.466259739 +0000 UTC Remote: 2024-09-17 17:17:34.386430471 +0000 UTC m=+22.127309401 (delta=79.829268ms)
	I0917 17:17:34.502463   29734 fix.go:200] guest clock delta is within tolerance: 79.829268ms
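	The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock; here the skew is about 79.8ms, within tolerance, so nothing is adjusted. A small illustrative sketch of that comparison (not the actual fix.go code; the tolerance value is an assumption):

```go
// Sketch: parse the guest's `date +%s.%N` output and measure skew against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726593454.466259739") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
```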
	I0917 17:17:34.502467   29734 start.go:83] releasing machines lock for "ha-181247", held for 22.135215361s
	I0917 17:17:34.502486   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.502755   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:34.505581   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.505944   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.505981   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.506132   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506672   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506814   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:34.506899   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:17:34.506935   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.507038   29734 ssh_runner.go:195] Run: cat /version.json
	I0917 17:17:34.507056   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:34.509573   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.509927   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.509981   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510009   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510033   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.510241   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.510425   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.510492   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:34.510512   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:34.510574   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.510671   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:34.510835   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:34.510969   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:34.511086   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:34.590070   29734 ssh_runner.go:195] Run: systemctl --version
	I0917 17:17:34.615056   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:17:34.776832   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:17:34.783180   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:17:34.783255   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:17:34.803950   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:17:34.803973   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:17:34.804063   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:17:34.822926   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:17:34.838681   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:17:34.838762   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:17:34.853846   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:17:34.869233   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:17:34.994726   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:17:35.155499   29734 docker.go:233] disabling docker service ...
	I0917 17:17:35.155559   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:17:35.171137   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:17:35.185712   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:17:35.320259   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:17:35.451710   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:17:35.466710   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:17:35.487225   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:17:35.487292   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.499290   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:17:35.499383   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.511419   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.523655   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.536429   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:17:35.549878   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.562463   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:17:35.582664   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
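	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, run conmon in the "pod" cgroup, and open unprivileged low ports through default_sysctls. A hedged sketch of that end state as a Go string rewrite (the starting config content is invented for illustration; minikube itself runs the sed commands shown in the log over SSH):

```go
// Sketch: the net effect of the sed edits above on 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Point CRI-O at the pause image minikube expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Use the cgroupfs driver, with conmon placed in the pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	// Allow pods to bind low ports without extra privileges.
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}
```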
	I0917 17:17:35.595132   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:17:35.606971   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:17:35.607027   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:17:35.621832   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:17:35.633279   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:17:35.759427   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:17:35.859232   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:17:35.859323   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:17:35.864467   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:17:35.864539   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:17:35.868712   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:17:35.914425   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:17:35.914511   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:17:35.945749   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:17:35.979543   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:17:35.981161   29734 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:17:35.983776   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:35.984080   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:35.984123   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:35.984272   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:17:35.988783   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:17:36.003551   29734 kubeadm.go:883] updating cluster {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:17:36.003694   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:36.003743   29734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:17:36.040043   29734 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 17:17:36.040121   29734 ssh_runner.go:195] Run: which lz4
	I0917 17:17:36.044793   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0917 17:17:36.044906   29734 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 17:17:36.049616   29734 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 17:17:36.049651   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 17:17:37.526460   29734 crio.go:462] duration metric: took 1.481579452s to copy over tarball
	I0917 17:17:37.526554   29734 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 17:17:39.581865   29734 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.055284031s)
	I0917 17:17:39.581906   29734 crio.go:469] duration metric: took 2.055410897s to extract the tarball
	I0917 17:17:39.581916   29734 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 17:17:39.619715   29734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:17:39.667830   29734 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:17:39.667853   29734 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:17:39.667862   29734 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.1 crio true true} ...
	I0917 17:17:39.667985   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:17:39.668050   29734 ssh_runner.go:195] Run: crio config
	I0917 17:17:39.720134   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:39.720157   29734 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 17:17:39.720169   29734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:17:39.720198   29734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181247 NodeName:ha-181247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:17:39.720379   29734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181247"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:17:39.720405   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:17:39.720457   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:17:39.737470   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:17:39.737600   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0917 17:17:39.737658   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:17:39.748502   29734 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:17:39.748589   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 17:17:39.758725   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0917 17:17:39.776655   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:17:39.794865   29734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0917 17:17:39.812575   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0917 17:17:39.829867   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:17:39.833919   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:17:39.847376   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:17:39.967101   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:17:39.985136   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.195
	I0917 17:17:39.985164   29734 certs.go:194] generating shared ca certs ...
	I0917 17:17:39.985186   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:39.985372   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:17:39.985442   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:17:39.985456   29734 certs.go:256] generating profile certs ...
	I0917 17:17:39.985529   29734 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:17:39.985547   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt with IP's: []
	I0917 17:17:40.064829   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt ...
	I0917 17:17:40.064859   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt: {Name:mk3079f6d5b8989ce7b1764d3b37598392b2af32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.065023   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key ...
	I0917 17:17:40.065034   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key: {Name:mk5b49f925f80708dafeed2ecaef8facba26de2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.065108   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9
	I0917 17:17:40.065126   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.254]
	I0917 17:17:40.144821   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 ...
	I0917 17:17:40.144846   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9: {Name:mk2ea83f7ca9c6e83670f0043b0246ce3797e00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.145023   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9 ...
	I0917 17:17:40.145040   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9: {Name:mk21c25b17297bed11f0801fc03553121c429b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.145138   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.14a947b9 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:17:40.145266   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.14a947b9 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:17:40.145343   29734 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:17:40.145359   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt with IP's: []
	I0917 17:17:40.498271   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt ...
	I0917 17:17:40.498324   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt: {Name:mk54f89ba6af98139c51d15c40e430bfe59aa203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.498507   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key ...
	I0917 17:17:40.498521   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key: {Name:mk94b16c6c206654a670864f84b420720096ef6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:40.498625   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:17:40.498644   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:17:40.498655   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:17:40.498665   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:17:40.498681   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:17:40.498691   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:17:40.498702   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:17:40.498712   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:17:40.498758   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:17:40.498792   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:17:40.498801   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:17:40.498820   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:17:40.498843   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:17:40.498869   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:17:40.498906   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:17:40.498931   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.498944   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.498957   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.499567   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:17:40.528588   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:17:40.554021   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:17:40.579122   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:17:40.604527   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 17:17:40.630072   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:17:40.655533   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:17:40.680429   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:17:40.706898   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:17:40.731672   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:17:40.759816   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:17:40.809801   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:17:40.828357   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:17:40.835303   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:17:40.847722   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.852545   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.852597   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:17:40.859002   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:17:40.871260   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:17:40.883696   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.888589   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.888661   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:17:40.894719   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:17:40.906835   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:17:40.918826   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.923634   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.923689   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:17:40.929587   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:17:40.941672   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:17:40.946142   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:17:40.946194   29734 kubeadm.go:392] StartCluster: {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:17:40.946257   29734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:17:40.946297   29734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:17:40.987549   29734 cri.go:89] found id: ""
	I0917 17:17:40.987615   29734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 17:17:40.999318   29734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 17:17:41.010837   29734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 17:17:41.022189   29734 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 17:17:41.022219   29734 kubeadm.go:157] found existing configuration files:
	
	I0917 17:17:41.022270   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 17:17:41.032930   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 17:17:41.032999   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 17:17:41.043745   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 17:17:41.053997   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 17:17:41.054067   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 17:17:41.064880   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 17:17:41.074961   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 17:17:41.075026   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 17:17:41.085766   29734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 17:17:41.096174   29734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 17:17:41.096231   29734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 17:17:41.107267   29734 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 17:17:41.229035   29734 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 17:17:41.229176   29734 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 17:17:41.347012   29734 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 17:17:41.347117   29734 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 17:17:41.347206   29734 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 17:17:41.358370   29734 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 17:17:41.376928   29734 out.go:235]   - Generating certificates and keys ...
	I0917 17:17:41.377051   29734 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 17:17:41.377114   29734 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 17:17:41.558413   29734 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 17:17:41.652625   29734 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 17:17:42.116063   29734 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 17:17:42.340573   29734 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 17:17:42.606864   29734 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 17:17:42.607028   29734 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-181247 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0917 17:17:43.174935   29734 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 17:17:43.175172   29734 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-181247 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0917 17:17:43.325108   29734 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 17:17:43.457430   29734 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 17:17:43.610259   29734 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 17:17:43.610381   29734 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 17:17:43.869331   29734 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 17:17:43.969996   29734 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 17:17:44.104548   29734 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 17:17:44.304014   29734 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 17:17:44.554355   29734 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 17:17:44.554814   29734 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 17:17:44.559120   29734 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 17:17:44.561642   29734 out.go:235]   - Booting up control plane ...
	I0917 17:17:44.561760   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 17:17:44.561883   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 17:17:44.562216   29734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 17:17:44.579298   29734 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 17:17:44.588734   29734 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 17:17:44.588841   29734 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 17:17:44.733605   29734 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 17:17:44.733824   29734 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 17:17:45.734464   29734 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004156457s
	I0917 17:17:45.734566   29734 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 17:17:51.382928   29734 kubeadm.go:310] [api-check] The API server is healthy after 5.65227271s
	I0917 17:17:51.397060   29734 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 17:17:51.428650   29734 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 17:17:51.963920   29734 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 17:17:51.964105   29734 kubeadm.go:310] [mark-control-plane] Marking the node ha-181247 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 17:17:51.977148   29734 kubeadm.go:310] [bootstrap-token] Using token: jv4hj7.gvj0gihpcecyr3ei
	I0917 17:17:51.979014   29734 out.go:235]   - Configuring RBAC rules ...
	I0917 17:17:51.979165   29734 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 17:17:51.985435   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 17:17:51.995160   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 17:17:52.010830   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 17:17:52.015515   29734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 17:17:52.024226   29734 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 17:17:52.040459   29734 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 17:17:52.295460   29734 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 17:17:52.794454   29734 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 17:17:52.795475   29734 kubeadm.go:310] 
	I0917 17:17:52.795579   29734 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 17:17:52.795590   29734 kubeadm.go:310] 
	I0917 17:17:52.795701   29734 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 17:17:52.795712   29734 kubeadm.go:310] 
	I0917 17:17:52.795743   29734 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 17:17:52.795812   29734 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 17:17:52.795859   29734 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 17:17:52.795868   29734 kubeadm.go:310] 
	I0917 17:17:52.795912   29734 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 17:17:52.795919   29734 kubeadm.go:310] 
	I0917 17:17:52.795982   29734 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 17:17:52.795994   29734 kubeadm.go:310] 
	I0917 17:17:52.796046   29734 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 17:17:52.796111   29734 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 17:17:52.796168   29734 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 17:17:52.796175   29734 kubeadm.go:310] 
	I0917 17:17:52.796243   29734 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 17:17:52.796307   29734 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 17:17:52.796313   29734 kubeadm.go:310] 
	I0917 17:17:52.796431   29734 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv4hj7.gvj0gihpcecyr3ei \
	I0917 17:17:52.796570   29734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 17:17:52.796604   29734 kubeadm.go:310] 	--control-plane 
	I0917 17:17:52.796621   29734 kubeadm.go:310] 
	I0917 17:17:52.796741   29734 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 17:17:52.796750   29734 kubeadm.go:310] 
	I0917 17:17:52.796867   29734 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv4hj7.gvj0gihpcecyr3ei \
	I0917 17:17:52.796995   29734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 17:17:52.798126   29734 kubeadm.go:310] W0917 17:17:41.195410     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:17:52.798517   29734 kubeadm.go:310] W0917 17:17:41.196383     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:17:52.798621   29734 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 17:17:52.798647   29734 cni.go:84] Creating CNI manager for ""
	I0917 17:17:52.798655   29734 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 17:17:52.800787   29734 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 17:17:52.802389   29734 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 17:17:52.808968   29734 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 17:17:52.808985   29734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 17:17:52.834898   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 17:17:53.248046   29734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 17:17:53.248136   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:53.248163   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247 minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=true
	I0917 17:17:53.416678   29734 ops.go:34] apiserver oom_adj: -16
	I0917 17:17:53.416822   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:53.917270   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:54.417627   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:54.917758   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:55.417671   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:55.917001   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:56.416919   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:56.917728   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:17:57.048080   29734 kubeadm.go:1113] duration metric: took 3.80000476s to wait for elevateKubeSystemPrivileges
	I0917 17:17:57.048120   29734 kubeadm.go:394] duration metric: took 16.10192849s to StartCluster
	I0917 17:17:57.048141   29734 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:57.048226   29734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:57.049291   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:17:57.050004   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 17:17:57.050022   29734 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:57.050049   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:17:57.050068   29734 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 17:17:57.050151   29734 addons.go:69] Setting storage-provisioner=true in profile "ha-181247"
	I0917 17:17:57.050173   29734 addons.go:69] Setting default-storageclass=true in profile "ha-181247"
	I0917 17:17:57.050188   29734 addons.go:234] Setting addon storage-provisioner=true in "ha-181247"
	I0917 17:17:57.050222   29734 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-181247"
	I0917 17:17:57.050272   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:57.050226   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:17:57.050724   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.050764   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.050764   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.050804   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.066436   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0917 17:17:57.066489   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0917 17:17:57.066942   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.067003   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.067508   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.067528   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.067508   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.067556   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.067899   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.067936   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.068101   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.068544   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.068590   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.070182   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:17:57.070452   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 17:17:57.070857   29734 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 17:17:57.071098   29734 addons.go:234] Setting addon default-storageclass=true in "ha-181247"
	I0917 17:17:57.071132   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:17:57.071433   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.071467   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.084769   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0917 17:17:57.085293   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.085895   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.085919   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.086266   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.086274   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I0917 17:17:57.086481   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.086812   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.087296   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.087318   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.087643   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.088180   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.088219   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.088491   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:57.091028   29734 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:17:57.092264   29734 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:17:57.092280   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 17:17:57.092295   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:57.095831   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.096408   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:57.096434   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.096761   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:57.096968   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:57.097113   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:57.097247   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:57.104921   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0917 17:17:57.105405   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.105853   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.105870   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.106256   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.106466   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:17:57.108299   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:17:57.108525   29734 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 17:17:57.108541   29734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 17:17:57.108554   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:17:57.111624   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.112024   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:17:57.112053   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:17:57.112259   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:17:57.112403   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:17:57.112537   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:17:57.112639   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:17:57.160956   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 17:17:57.227238   29734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:17:57.270544   29734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:17:57.553613   29734 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0917 17:17:57.735871   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.735899   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.735935   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.735954   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736205   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736223   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736232   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.736239   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736245   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736262   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736272   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.736281   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.736199   29734 main.go:141] libmachine: (ha-181247) DBG | Closing plugin on server side
	I0917 17:17:57.736423   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736433   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736493   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.736503   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.736578   29734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 17:17:57.736596   29734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 17:17:57.736718   29734 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0917 17:17:57.736731   29734 round_trippers.go:469] Request Headers:
	I0917 17:17:57.736742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:17:57.736747   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:17:57.750672   29734 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0917 17:17:57.751432   29734 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0917 17:17:57.751452   29734 round_trippers.go:469] Request Headers:
	I0917 17:17:57.751463   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:17:57.751471   29734 round_trippers.go:473]     Content-Type: application/json
	I0917 17:17:57.751478   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:17:57.753965   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:17:57.754145   29734 main.go:141] libmachine: Making call to close driver server
	I0917 17:17:57.754162   29734 main.go:141] libmachine: (ha-181247) Calling .Close
	I0917 17:17:57.754510   29734 main.go:141] libmachine: Successfully made call to close driver server
	I0917 17:17:57.754567   29734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 17:17:57.754583   29734 main.go:141] libmachine: (ha-181247) DBG | Closing plugin on server side
	I0917 17:17:57.756623   29734 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0917 17:17:57.758034   29734 addons.go:510] duration metric: took 707.968528ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 17:17:57.758078   29734 start.go:246] waiting for cluster config update ...
	I0917 17:17:57.758090   29734 start.go:255] writing updated cluster config ...
	I0917 17:17:57.759731   29734 out.go:201] 
	I0917 17:17:57.761159   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:17:57.761306   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:57.762945   29734 out.go:177] * Starting "ha-181247-m02" control-plane node in "ha-181247" cluster
	I0917 17:17:57.764198   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:17:57.764230   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:17:57.764349   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:17:57.764361   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:17:57.764433   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:17:57.764626   29734 start.go:360] acquireMachinesLock for ha-181247-m02: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:17:57.764673   29734 start.go:364] duration metric: took 27.836µs to acquireMachinesLock for "ha-181247-m02"
	I0917 17:17:57.764693   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:17:57.764769   29734 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0917 17:17:57.766494   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:17:57.766576   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:17:57.766611   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:17:57.781200   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0917 17:17:57.781679   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:17:57.782109   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:17:57.782128   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:17:57.782423   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:17:57.782604   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:17:57.782725   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:17:57.782844   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:17:57.782876   29734 client.go:168] LocalClient.Create starting
	I0917 17:17:57.782909   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:17:57.782947   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:57.782970   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:57.783034   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:17:57.783071   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:17:57.783090   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:17:57.783124   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:17:57.783133   29734 main.go:141] libmachine: (ha-181247-m02) Calling .PreCreateCheck
	I0917 17:17:57.783246   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:17:57.783600   29734 main.go:141] libmachine: Creating machine...
	I0917 17:17:57.783614   29734 main.go:141] libmachine: (ha-181247-m02) Calling .Create
	I0917 17:17:57.783705   29734 main.go:141] libmachine: (ha-181247-m02) Creating KVM machine...
	I0917 17:17:57.784871   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found existing default KVM network
	I0917 17:17:57.784944   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found existing private KVM network mk-ha-181247
	I0917 17:17:57.785123   29734 main.go:141] libmachine: (ha-181247-m02) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 ...
	I0917 17:17:57.785140   29734 main.go:141] libmachine: (ha-181247-m02) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:17:57.785285   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:57.785116   30104 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:57.785359   29734 main.go:141] libmachine: (ha-181247-m02) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:17:58.016182   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.016045   30104 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa...
	I0917 17:17:58.178317   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.178194   30104 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/ha-181247-m02.rawdisk...
	I0917 17:17:58.178361   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Writing magic tar header
	I0917 17:17:58.178377   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Writing SSH key tar header
	I0917 17:17:58.178389   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:58.178302   30104 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 ...
	I0917 17:17:58.178429   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02
	I0917 17:17:58.178453   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:17:58.178472   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02 (perms=drwx------)
	I0917 17:17:58.178482   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:17:58.178492   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:17:58.178498   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:17:58.178506   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:17:58.178515   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:17:58.178521   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:17:58.178529   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:17:58.178534   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:17:58.178572   29734 main.go:141] libmachine: (ha-181247-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:17:58.178589   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Checking permissions on dir: /home
	I0917 17:17:58.178595   29734 main.go:141] libmachine: (ha-181247-m02) Creating domain...
	I0917 17:17:58.178605   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Skipping /home - not owner
	I0917 17:17:58.179564   29734 main.go:141] libmachine: (ha-181247-m02) define libvirt domain using xml: 
	I0917 17:17:58.179579   29734 main.go:141] libmachine: (ha-181247-m02) <domain type='kvm'>
	I0917 17:17:58.179586   29734 main.go:141] libmachine: (ha-181247-m02)   <name>ha-181247-m02</name>
	I0917 17:17:58.179590   29734 main.go:141] libmachine: (ha-181247-m02)   <memory unit='MiB'>2200</memory>
	I0917 17:17:58.179595   29734 main.go:141] libmachine: (ha-181247-m02)   <vcpu>2</vcpu>
	I0917 17:17:58.179599   29734 main.go:141] libmachine: (ha-181247-m02)   <features>
	I0917 17:17:58.179606   29734 main.go:141] libmachine: (ha-181247-m02)     <acpi/>
	I0917 17:17:58.179612   29734 main.go:141] libmachine: (ha-181247-m02)     <apic/>
	I0917 17:17:58.179618   29734 main.go:141] libmachine: (ha-181247-m02)     <pae/>
	I0917 17:17:58.179634   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.179644   29734 main.go:141] libmachine: (ha-181247-m02)   </features>
	I0917 17:17:58.179653   29734 main.go:141] libmachine: (ha-181247-m02)   <cpu mode='host-passthrough'>
	I0917 17:17:58.179658   29734 main.go:141] libmachine: (ha-181247-m02)   
	I0917 17:17:58.179664   29734 main.go:141] libmachine: (ha-181247-m02)   </cpu>
	I0917 17:17:58.179668   29734 main.go:141] libmachine: (ha-181247-m02)   <os>
	I0917 17:17:58.179673   29734 main.go:141] libmachine: (ha-181247-m02)     <type>hvm</type>
	I0917 17:17:58.179700   29734 main.go:141] libmachine: (ha-181247-m02)     <boot dev='cdrom'/>
	I0917 17:17:58.179723   29734 main.go:141] libmachine: (ha-181247-m02)     <boot dev='hd'/>
	I0917 17:17:58.179734   29734 main.go:141] libmachine: (ha-181247-m02)     <bootmenu enable='no'/>
	I0917 17:17:58.179743   29734 main.go:141] libmachine: (ha-181247-m02)   </os>
	I0917 17:17:58.179752   29734 main.go:141] libmachine: (ha-181247-m02)   <devices>
	I0917 17:17:58.179761   29734 main.go:141] libmachine: (ha-181247-m02)     <disk type='file' device='cdrom'>
	I0917 17:17:58.179783   29734 main.go:141] libmachine: (ha-181247-m02)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/boot2docker.iso'/>
	I0917 17:17:58.179794   29734 main.go:141] libmachine: (ha-181247-m02)       <target dev='hdc' bus='scsi'/>
	I0917 17:17:58.179809   29734 main.go:141] libmachine: (ha-181247-m02)       <readonly/>
	I0917 17:17:58.179828   29734 main.go:141] libmachine: (ha-181247-m02)     </disk>
	I0917 17:17:58.179844   29734 main.go:141] libmachine: (ha-181247-m02)     <disk type='file' device='disk'>
	I0917 17:17:58.179861   29734 main.go:141] libmachine: (ha-181247-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:17:58.179877   29734 main.go:141] libmachine: (ha-181247-m02)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/ha-181247-m02.rawdisk'/>
	I0917 17:17:58.179887   29734 main.go:141] libmachine: (ha-181247-m02)       <target dev='hda' bus='virtio'/>
	I0917 17:17:58.179893   29734 main.go:141] libmachine: (ha-181247-m02)     </disk>
	I0917 17:17:58.179898   29734 main.go:141] libmachine: (ha-181247-m02)     <interface type='network'>
	I0917 17:17:58.179903   29734 main.go:141] libmachine: (ha-181247-m02)       <source network='mk-ha-181247'/>
	I0917 17:17:58.179910   29734 main.go:141] libmachine: (ha-181247-m02)       <model type='virtio'/>
	I0917 17:17:58.179915   29734 main.go:141] libmachine: (ha-181247-m02)     </interface>
	I0917 17:17:58.179922   29734 main.go:141] libmachine: (ha-181247-m02)     <interface type='network'>
	I0917 17:17:58.179936   29734 main.go:141] libmachine: (ha-181247-m02)       <source network='default'/>
	I0917 17:17:58.179952   29734 main.go:141] libmachine: (ha-181247-m02)       <model type='virtio'/>
	I0917 17:17:58.179963   29734 main.go:141] libmachine: (ha-181247-m02)     </interface>
	I0917 17:17:58.179973   29734 main.go:141] libmachine: (ha-181247-m02)     <serial type='pty'>
	I0917 17:17:58.179982   29734 main.go:141] libmachine: (ha-181247-m02)       <target port='0'/>
	I0917 17:17:58.179991   29734 main.go:141] libmachine: (ha-181247-m02)     </serial>
	I0917 17:17:58.179999   29734 main.go:141] libmachine: (ha-181247-m02)     <console type='pty'>
	I0917 17:17:58.180009   29734 main.go:141] libmachine: (ha-181247-m02)       <target type='serial' port='0'/>
	I0917 17:17:58.180022   29734 main.go:141] libmachine: (ha-181247-m02)     </console>
	I0917 17:17:58.180034   29734 main.go:141] libmachine: (ha-181247-m02)     <rng model='virtio'>
	I0917 17:17:58.180045   29734 main.go:141] libmachine: (ha-181247-m02)       <backend model='random'>/dev/random</backend>
	I0917 17:17:58.180054   29734 main.go:141] libmachine: (ha-181247-m02)     </rng>
	I0917 17:17:58.180061   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.180068   29734 main.go:141] libmachine: (ha-181247-m02)     
	I0917 17:17:58.180073   29734 main.go:141] libmachine: (ha-181247-m02)   </devices>
	I0917 17:17:58.180077   29734 main.go:141] libmachine: (ha-181247-m02) </domain>
	I0917 17:17:58.180094   29734 main.go:141] libmachine: (ha-181247-m02) 
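The XML above is the full libvirt domain definition the kvm2 driver logs before it creates the VM. As a rough sketch of the define-and-start step (not minikube's actual driver code; the trimmed XML literal is a placeholder), the libvirt Go bindings can be used like this:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
	// Connect to the same URI the config dump shows (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domainXML stands in for the <domain type='kvm'>...</domain> document printed above;
	// this placeholder will not define a usable VM, it only shows the call shape.
	domainXML := "<domain type='kvm'>...</domain>"

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}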
	I0917 17:17:58.187935   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:b7:3c:7a in network default
	I0917 17:17:58.188506   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring networks are active...
	I0917 17:17:58.188531   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:58.189188   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring network default is active
	I0917 17:17:58.189474   29734 main.go:141] libmachine: (ha-181247-m02) Ensuring network mk-ha-181247 is active
	I0917 17:17:58.189796   29734 main.go:141] libmachine: (ha-181247-m02) Getting domain xml...
	I0917 17:17:58.190559   29734 main.go:141] libmachine: (ha-181247-m02) Creating domain...
	I0917 17:17:59.445602   29734 main.go:141] libmachine: (ha-181247-m02) Waiting to get IP...
	I0917 17:17:59.446507   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.446930   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.446966   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.446914   30104 retry.go:31] will retry after 263.3297ms: waiting for machine to come up
	I0917 17:17:59.712214   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.712719   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.712744   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.712669   30104 retry.go:31] will retry after 236.146897ms: waiting for machine to come up
	I0917 17:17:59.950043   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:17:59.950493   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:17:59.950513   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:17:59.950450   30104 retry.go:31] will retry after 440.967944ms: waiting for machine to come up
	I0917 17:18:00.393105   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:00.393638   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:00.393664   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:00.393579   30104 retry.go:31] will retry after 520.557465ms: waiting for machine to come up
	I0917 17:18:00.915263   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:00.915684   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:00.915712   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:00.915620   30104 retry.go:31] will retry after 655.302859ms: waiting for machine to come up
	I0917 17:18:01.572071   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:01.572499   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:01.572527   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:01.572471   30104 retry.go:31] will retry after 849.8849ms: waiting for machine to come up
	I0917 17:18:02.423434   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:02.423972   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:02.423997   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:02.423904   30104 retry.go:31] will retry after 978.609236ms: waiting for machine to come up
	I0917 17:18:03.404323   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:03.404859   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:03.404888   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:03.404806   30104 retry.go:31] will retry after 1.1479538s: waiting for machine to come up
	I0917 17:18:04.554114   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:04.554487   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:04.554512   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:04.554472   30104 retry.go:31] will retry after 1.832387096s: waiting for machine to come up
	I0917 17:18:06.389580   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:06.390011   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:06.390035   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:06.389973   30104 retry.go:31] will retry after 1.907985426s: waiting for machine to come up
	I0917 17:18:08.299652   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:08.300189   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:08.300211   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:08.300157   30104 retry.go:31] will retry after 1.842850915s: waiting for machine to come up
	I0917 17:18:10.145000   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:10.145487   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:10.145508   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:10.145448   30104 retry.go:31] will retry after 2.563514245s: waiting for machine to come up
	I0917 17:18:12.712222   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:12.712706   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:12.712737   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:12.712675   30104 retry.go:31] will retry after 3.925683535s: waiting for machine to come up
	I0917 17:18:16.642998   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:16.643406   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find current IP address of domain ha-181247-m02 in network mk-ha-181247
	I0917 17:18:16.643427   29734 main.go:141] libmachine: (ha-181247-m02) DBG | I0917 17:18:16.643365   30104 retry.go:31] will retry after 4.188157974s: waiting for machine to come up
	I0917 17:18:20.834295   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.834870   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.834901   29734 main.go:141] libmachine: (ha-181247-m02) Found IP for machine: 192.168.39.11
	I0917 17:18:20.834915   29734 main.go:141] libmachine: (ha-181247-m02) Reserving static IP address...
	I0917 17:18:20.835306   29734 main.go:141] libmachine: (ha-181247-m02) DBG | unable to find host DHCP lease matching {name: "ha-181247-m02", mac: "52:54:00:a4:df:96", ip: "192.168.39.11"} in network mk-ha-181247
	I0917 17:18:20.914631   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Getting to WaitForSSH function...
	I0917 17:18:20.914671   29734 main.go:141] libmachine: (ha-181247-m02) Reserved static IP address: 192.168.39.11
	I0917 17:18:20.914694   29734 main.go:141] libmachine: (ha-181247-m02) Waiting for SSH to be available...
	I0917 17:18:20.917727   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.918105   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:20.918135   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:20.918256   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using SSH client type: external
	I0917 17:18:20.918287   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa (-rw-------)
	I0917 17:18:20.918332   29734 main.go:141] libmachine: (ha-181247-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:18:20.918353   29734 main.go:141] libmachine: (ha-181247-m02) DBG | About to run SSH command:
	I0917 17:18:20.918378   29734 main.go:141] libmachine: (ha-181247-m02) DBG | exit 0
	I0917 17:18:21.045627   29734 main.go:141] libmachine: (ha-181247-m02) DBG | SSH cmd err, output: <nil>: 
	I0917 17:18:21.045884   29734 main.go:141] libmachine: (ha-181247-m02) KVM machine creation complete!
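The repeated "will retry after ...: waiting for machine to come up" lines are produced by a backoff-style retry loop that re-reads the DHCP leases until the new domain reports an address. A generic sketch of that pattern, with hypothetical helper names rather than the real retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a growing,
// jittered interval between attempts, much like the retry.go lines in the log.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Add jitter and grow the delay, capping it so polling keeps going.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.11", nil
	}, time.Minute)
	fmt.Println(ip, err)
}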
	I0917 17:18:21.046333   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:18:21.046946   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:21.047221   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:21.047384   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:18:21.047413   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:18:21.048705   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:18:21.048717   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:18:21.048722   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:18:21.048728   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.050992   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.051417   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.051442   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.051589   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.051758   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.051883   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.051992   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.052143   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.052447   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.052463   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:18:21.160757   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:18:21.160782   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:18:21.160790   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.163388   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.163703   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.163736   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.163882   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.164042   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.164222   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.164343   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.164539   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.164733   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.164746   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:18:21.274826   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:18:21.274913   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:18:21.274923   29734 main.go:141] libmachine: Provisioning with buildroot...
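Provisioner detection is just "cat /etc/os-release" executed over SSH with the machine's freshly generated key, then matching NAME=Buildroot. A minimal sketch of that probe using golang.org/x/crypto/ssh, reusing the address and key path from the log (error handling kept deliberately short):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as they appear in the log; adjust for another environment.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.11:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // expected to contain NAME=Buildroot, VERSION_ID=2023.02.9, ...
}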
	I0917 17:18:21.274931   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.275195   29734 buildroot.go:166] provisioning hostname "ha-181247-m02"
	I0917 17:18:21.275211   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.275418   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.277879   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.278227   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.278256   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.278398   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.278590   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.278731   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.278882   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.279031   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.279198   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.279210   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247-m02 && echo "ha-181247-m02" | sudo tee /etc/hostname
	I0917 17:18:21.405388   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247-m02
	
	I0917 17:18:21.405424   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.408809   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.409168   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.409195   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.409399   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.409584   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.409728   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.409851   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.409983   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.410157   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.410172   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:18:21.526487   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:18:21.526513   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:18:21.526527   29734 buildroot.go:174] setting up certificates
	I0917 17:18:21.526536   29734 provision.go:84] configureAuth start
	I0917 17:18:21.526545   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetMachineName
	I0917 17:18:21.526843   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:21.529384   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.529812   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.529836   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.529971   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.532743   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.533108   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.533134   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.533269   29734 provision.go:143] copyHostCerts
	I0917 17:18:21.533310   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:18:21.533352   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:18:21.533361   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:18:21.533428   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:18:21.533765   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:18:21.533807   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:18:21.533815   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:18:21.533864   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:18:21.534035   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:18:21.534074   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:18:21.534084   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:18:21.534143   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:18:21.534238   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247-m02 san=[127.0.0.1 192.168.39.11 ha-181247-m02 localhost minikube]
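The "generating server cert" line shows the SANs placed on the machine's server certificate (loopback, the node IP, the hostname, localhost, minikube), signed with the profile's CA key pair. Below is a compressed standard-library sketch of issuing such a certificate; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, so it illustrates the shape of the operation rather than the provisioner's code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the shared CA from the profile's certs directory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs printed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-181247-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"ha-181247-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}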
	I0917 17:18:21.602336   29734 provision.go:177] copyRemoteCerts
	I0917 17:18:21.602400   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:18:21.602427   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.605998   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.606365   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.606406   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.606636   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.606839   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.607021   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.607134   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:21.692171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:18:21.692260   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:18:21.718050   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:18:21.718125   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 17:18:21.742902   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:18:21.742985   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:18:21.768165   29734 provision.go:87] duration metric: took 241.617875ms to configureAuth
	I0917 17:18:21.768198   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:18:21.768391   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:21.768463   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:21.771121   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.771489   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:21.771517   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:21.771752   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:21.771919   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.772101   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:21.772248   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:21.772392   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:21.772545   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:21.772559   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:18:22.006520   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:18:22.006545   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:18:22.006553   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetURL
	I0917 17:18:22.007874   29734 main.go:141] libmachine: (ha-181247-m02) DBG | Using libvirt version 6000000
	I0917 17:18:22.010313   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.010655   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.010682   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.010906   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:18:22.010921   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:18:22.010929   29734 client.go:171] duration metric: took 24.228046586s to LocalClient.Create
	I0917 17:18:22.010955   29734 start.go:167] duration metric: took 24.228112951s to libmachine.API.Create "ha-181247"
	I0917 17:18:22.010966   29734 start.go:293] postStartSetup for "ha-181247-m02" (driver="kvm2")
	I0917 17:18:22.010980   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:18:22.011005   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.011239   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:18:22.011261   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.013775   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.014065   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.014092   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.014234   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.014441   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.014609   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.014774   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.102790   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:18:22.107538   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:18:22.107563   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:18:22.107640   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:18:22.107710   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:18:22.107719   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:18:22.107799   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:18:22.120049   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:18:22.148561   29734 start.go:296] duration metric: took 137.580177ms for postStartSetup
	I0917 17:18:22.148607   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetConfigRaw
	I0917 17:18:22.149220   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:22.152005   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.152362   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.152384   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.152666   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:18:22.152912   29734 start.go:128] duration metric: took 24.388130663s to createHost
	I0917 17:18:22.152940   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.155168   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.155507   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.155533   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.155714   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.155897   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.156033   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.156180   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.156294   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:18:22.156469   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0917 17:18:22.156480   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:18:22.266321   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593502.221764932
	
	I0917 17:18:22.266340   29734 fix.go:216] guest clock: 1726593502.221764932
	I0917 17:18:22.266347   29734 fix.go:229] Guest: 2024-09-17 17:18:22.221764932 +0000 UTC Remote: 2024-09-17 17:18:22.152926043 +0000 UTC m=+69.893805041 (delta=68.838889ms)
	I0917 17:18:22.266364   29734 fix.go:200] guest clock delta is within tolerance: 68.838889ms
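The fix.go lines compare the guest's "date +%s.%N" output with a host-side timestamp taken around the SSH call and accept the machine when the delta is small. A small sketch of that comparison; the one-second tolerance here is an assumption for illustration, not the value minikube uses:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as shown in the log.
	guestRaw := "1726593502.221764932"

	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	// Float conversion loses a little sub-microsecond precision; fine for a sketch.
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	remote := time.Now() // host-side timestamp taken around the SSH call
	delta := remote.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}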
	I0917 17:18:22.266368   29734 start.go:83] releasing machines lock for "ha-181247-m02", held for 24.501686632s
	I0917 17:18:22.266384   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.266622   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:22.269609   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.270023   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.270058   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.272589   29734 out.go:177] * Found network options:
	I0917 17:18:22.274235   29734 out.go:177]   - NO_PROXY=192.168.39.195
	W0917 17:18:22.275808   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:18:22.275838   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276486   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276802   29734 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:18:22.276915   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:18:22.276954   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	W0917 17:18:22.276986   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:18:22.277042   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:18:22.277059   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:18:22.280134   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280462   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.280488   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280508   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.280645   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.280794   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.280930   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.280955   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:22.280994   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:22.281087   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.281162   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:18:22.281327   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:18:22.281575   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:18:22.281701   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:18:22.529244   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:18:22.535467   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:18:22.535526   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:18:22.552017   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:18:22.552045   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:18:22.552109   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:18:22.569131   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:18:22.585085   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:18:22.585132   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:18:22.600389   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:18:22.615637   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:18:22.732209   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:18:22.872622   29734 docker.go:233] disabling docker service ...
	I0917 17:18:22.872701   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:18:22.888542   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:18:22.903914   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:18:23.051397   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:18:23.181885   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:18:23.200187   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:18:23.222525   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:18:23.222579   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.235585   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:18:23.235658   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.248584   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.261726   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.274049   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:18:23.287882   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.300763   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.320232   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:18:23.332578   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:18:23.345466   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:18:23.345532   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:18:23.362760   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:18:23.374070   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:23.505526   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:18:23.604959   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:18:23.605036   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:18:23.610222   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:18:23.610291   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:18:23.614410   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:18:23.658480   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:18:23.658573   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:18:23.688813   29734 ssh_runner.go:195] Run: crio --version
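The runs above configure CRI-O on the new node: /etc/crio/crio.conf.d/02-crio.conf is rewritten with sed to pin the pause image and the cgroupfs cgroup driver, br_netfilter is loaded after the sysctl probe fails, IP forwarding is switched on, and crio is restarted and checked with crictl. A minimal Go sketch of those sed rewrites follows; it runs the commands locally rather than over SSH and only borrows the path and values shown in the log, so treat it as an illustration of the edit, not minikube's actual helper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	edits := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
	}
	for _, cmd := range edits {
		// In the log these run on the guest VM via ssh_runner; here they run locally
		// purely to show the shape of the rewrite.
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n%s(err=%v)\n\n", cmd, out, err)
	}
}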
	I0917 17:18:23.722694   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:18:23.724752   29734 out.go:177]   - env NO_PROXY=192.168.39.195
	I0917 17:18:23.726126   29734 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:18:23.728958   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:23.729375   29734 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:18:12 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:18:23.729394   29734 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:18:23.729654   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:18:23.734247   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:18:23.748348   29734 mustload.go:65] Loading cluster: ha-181247
	I0917 17:18:23.748548   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:23.748862   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:23.748904   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:23.763864   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0917 17:18:23.764339   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:23.764903   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:23.764923   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:23.765213   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:23.765412   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:18:23.767286   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:18:23.767583   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:23.767627   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:23.783128   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0917 17:18:23.783610   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:23.784033   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:23.784050   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:23.784454   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:23.784638   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:18:23.784792   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.11
	I0917 17:18:23.784802   29734 certs.go:194] generating shared ca certs ...
	I0917 17:18:23.784820   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:23.784957   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:18:23.785010   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:18:23.785024   29734 certs.go:256] generating profile certs ...
	I0917 17:18:23.785109   29734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:18:23.785142   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d
	I0917 17:18:23.785163   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.254]
	I0917 17:18:24.017669   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d ...
	I0917 17:18:24.017698   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d: {Name:mk6fcd886260f431a2e141d60740f6e275c19e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:24.017871   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d ...
	I0917 17:18:24.017883   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d: {Name:mk928b4dd45f83731946f9df6abb001fae0c8aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:18:24.017955   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.1273407d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:18:24.018083   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.1273407d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:18:24.018227   29734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
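The certs.go lines above regenerate the shared apiserver serving certificate so its SANs cover the service IP (10.96.0.1), loopback, both control-plane node IPs, and the VIP 192.168.39.254. A rough Go sketch of issuing a certificate with exactly those IP SANs is below; it self-signs for brevity and uses a hypothetical CommonName, whereas the real flow signs with the minikubeCA key, so it shows the SAN handling only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // hypothetical CN for this sketch
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.195"), net.ParseIP("192.168.39.11"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed purely for illustration; the real flow signs with the minikubeCA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}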
	I0917 17:18:24.018250   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:18:24.018262   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:18:24.018273   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:18:24.018286   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:18:24.018296   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:18:24.018306   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:18:24.018317   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:18:24.018326   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:18:24.018375   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:18:24.018404   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:18:24.018414   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:18:24.018434   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:18:24.018455   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:18:24.018474   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:18:24.018554   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:18:24.018581   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.018594   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.018608   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.018638   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:18:24.021899   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:24.022334   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:18:24.022366   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:24.022542   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:18:24.022737   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:18:24.022910   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:18:24.023032   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:18:24.097708   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 17:18:24.103770   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 17:18:24.115930   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 17:18:24.120584   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 17:18:24.134328   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 17:18:24.139095   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 17:18:24.151258   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 17:18:24.155654   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 17:18:24.165649   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 17:18:24.169836   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 17:18:24.180495   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 17:18:24.185119   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 17:18:24.196446   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:18:24.222479   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:18:24.247258   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:18:24.271758   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:18:24.296512   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 17:18:24.321219   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:18:24.346235   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:18:24.370848   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:18:24.396302   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:18:24.422386   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:18:24.449417   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:18:24.476090   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 17:18:24.495010   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 17:18:24.513069   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 17:18:24.533094   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 17:18:24.553421   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 17:18:24.573290   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 17:18:24.593016   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 17:18:24.611184   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:18:24.617107   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:18:24.629819   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.634464   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.634518   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:18:24.640567   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:18:24.652501   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:18:24.664675   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.669605   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.669656   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:18:24.675690   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:18:24.687747   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:18:24.701272   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.706122   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.706198   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:18:24.712224   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
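Each "openssl x509 -hash -noout -in" run above computes the OpenSSL subject hash (51391683, 3ec20f2e, b5213941 in the symlink commands) so that a "<hash>.0" link in /etc/ssl/certs lets OpenSSL-based clients discover the CA. A small Go sketch of that hash-and-link step follows, assuming openssl on PATH and enough privileges for the symlink; the print-then-link behaviour is a choice of this sketch, not part of the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	ca := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the symlink name above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	fmt.Println("linking", link, "->", ca)
	if err := os.Symlink(ca, link); err != nil {
		fmt.Println("symlink needs root, skipped:", err) // illustrative only
	}
}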
	I0917 17:18:24.724111   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:18:24.728943   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:18:24.728994   29734 kubeadm.go:934] updating node {m02 192.168.39.11 8443 v1.31.1 crio true true} ...
	I0917 17:18:24.729082   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:18:24.729132   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:18:24.729169   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:18:24.749085   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:18:24.749220   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
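The manifest above is a static pod for kube-vip: it holds the virtual IP 192.168.39.254 on eth0 via ARP leader election and, with lb_enable/lb_port, load-balances the control plane on port 8443; a later line in this log copies it to /etc/kubernetes/manifests/kube-vip.yaml. As a loose illustration only, the Go snippet below probes that VIP endpoint the way a post-join sanity check might; the /healthz path and the skip-verify TLS transport are assumptions of the sketch, not something the test performs.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 192.168.39.254:8443 is the VIP and port from the manifest above.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not answering yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered:", resp.Status)
}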
	I0917 17:18:24.749310   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:18:24.760483   29734 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0917 17:18:24.760564   29734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0917 17:18:24.771164   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0917 17:18:24.771197   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:18:24.771262   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:18:24.771260   29734 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0917 17:18:24.771263   29734 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0917 17:18:24.775912   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0917 17:18:24.775951   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0917 17:18:25.394171   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:18:25.394246   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:18:25.400533   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0917 17:18:25.400572   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0917 17:18:25.526918   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:18:25.558888   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:18:25.559005   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:18:25.580514   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0917 17:18:25.580552   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
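The download.go and binary.go lines above fetch kubectl, kubeadm, and kubelet from dl.k8s.io with a companion .sha256 checksum, cache them under .minikube/cache, and scp them into /var/lib/minikube/binaries/v1.31.1 on the node. A self-contained Go sketch of such a checksum-verified download is below; the URL matches the log, but buffering the whole binary in memory and writing it to the current directory are simplifications of this sketch.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, wasteful for a 70+ MB binary.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet downloaded and verified")
}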
	I0917 17:18:26.020645   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 17:18:26.031121   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 17:18:26.049185   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:18:26.066971   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:18:26.084941   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:18:26.089581   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:18:26.104518   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:26.249535   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:18:26.267954   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:18:26.268305   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:18:26.268352   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:18:26.284171   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0917 17:18:26.284750   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:18:26.285379   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:18:26.285410   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:18:26.285784   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:18:26.285973   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:18:26.286132   29734 start.go:317] joinCluster: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:18:26.286260   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 17:18:26.286284   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:18:26.289193   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:26.289780   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:18:26.289806   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:18:26.290017   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:18:26.290229   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:18:26.290408   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:18:26.290560   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:18:26.444849   29734 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:18:26.444896   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2a07d3.ux3juwlz64et24sq --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443"
	I0917 17:18:49.054070   29734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2a07d3.ux3juwlz64et24sq --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m02 --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443": (22.609145893s)
	I0917 17:18:49.054109   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 17:18:49.583985   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247-m02 minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=false
	I0917 17:18:49.708990   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-181247-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 17:18:49.837362   29734 start.go:319] duration metric: took 23.551222749s to joinCluster
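The join above is two steps: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then the printed command on the new node with the control-plane, advertise-address, and CRI-socket flags appended, followed by labeling the node and removing the control-plane NoSchedule taint. The Go sketch below only glues those two steps together locally to show the command's shape; the flag set is copied from the log, and the final exec is left commented out because running it would reconfigure a real host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on an existing control-plane node): print a join command with a fresh token.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): append the control-plane flags seen in the log.
	join += " --control-plane --apiserver-advertise-address=192.168.39.11 --apiserver-bind-port=8443" +
		" --cri-socket unix:///var/run/crio/crio.sock --ignore-preflight-errors=all"
	fmt.Println("would run:", join)
	// exec.Command("sh", "-c", "sudo "+join).Run() // deliberately not executed in this sketch
}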
	I0917 17:18:49.837441   29734 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:18:49.837720   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:18:49.838795   29734 out.go:177] * Verifying Kubernetes components...
	I0917 17:18:49.839889   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:18:50.094124   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:18:50.126727   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:18:50.127076   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 17:18:50.127174   29734 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0917 17:18:50.127450   29734 node_ready.go:35] waiting up to 6m0s for node "ha-181247-m02" to be "Ready" ...
	I0917 17:18:50.127549   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:50.127557   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:50.127564   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:50.127572   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:50.140407   29734 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0917 17:18:50.628424   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:50.628447   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:50.628457   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:50.628463   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:50.634586   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:18:51.128653   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:51.128683   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:51.128695   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:51.128701   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:51.148397   29734 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0917 17:18:51.628341   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:51.628385   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:51.628394   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:51.628398   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:51.662216   29734 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0917 17:18:52.128469   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:52.128496   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:52.128507   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:52.128514   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:52.133662   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:18:52.134368   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:52.628563   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:52.628586   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:52.628597   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:52.628602   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:52.708335   29734 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0917 17:18:53.128636   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:53.128663   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:53.128672   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:53.128677   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:53.132233   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:53.627931   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:53.627954   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:53.627962   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:53.627970   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:53.631427   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.127631   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:54.127652   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:54.127660   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:54.127664   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:54.131464   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.628648   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:54.628679   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:54.628690   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:54.628694   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:54.632607   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:54.633308   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:55.127675   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:55.127696   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:55.127706   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:55.127710   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:55.132315   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:18:55.628076   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:55.628099   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:55.628107   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:55.628113   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:55.631189   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:56.128064   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:56.128094   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:56.128105   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:56.128111   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:56.134365   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:18:56.628672   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:56.628695   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:56.628704   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:56.628709   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:56.631642   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:18:57.128083   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:57.128106   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:57.128115   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:57.128119   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:57.131959   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:57.132534   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:57.628498   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:57.628520   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:57.628528   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:57.628532   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:57.632525   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:58.128220   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:58.128248   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:58.128259   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:58.128264   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:58.131830   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:58.627856   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:58.627881   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:58.627892   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:58.627896   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:58.631192   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:59.128329   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:59.128354   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:59.128362   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:59.128366   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:59.132197   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:18:59.132757   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:18:59.628125   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:18:59.628149   29734 round_trippers.go:469] Request Headers:
	I0917 17:18:59.628160   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:18:59.628167   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:18:59.631656   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:00.128448   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:00.128476   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:00.128484   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:00.128489   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:00.132326   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:00.628352   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:00.628380   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:00.628388   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:00.628392   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:00.631953   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:01.127782   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:01.127807   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:01.127817   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:01.127823   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:01.131862   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:01.627971   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:01.627997   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:01.628005   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:01.628009   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:01.631442   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:01.632220   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:19:02.128452   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:02.128476   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:02.128491   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:02.128495   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:02.132367   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:02.628547   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:02.628569   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:02.628577   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:02.628581   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:02.632883   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:03.128467   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:03.128491   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:03.128499   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:03.128504   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:03.131890   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:03.627748   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:03.627771   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:03.627778   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:03.627783   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:03.631871   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:03.632342   29734 node_ready.go:53] node "ha-181247-m02" has status "Ready":"False"
	I0917 17:19:04.128630   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.128656   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.128665   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.128668   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.133180   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.627677   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.627702   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.627713   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.627719   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.631015   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.631502   29734 node_ready.go:49] node "ha-181247-m02" has status "Ready":"True"
	I0917 17:19:04.631526   29734 node_ready.go:38] duration metric: took 14.504055199s for node "ha-181247-m02" to be "Ready" ...
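The round_trippers lines above are the readiness loop: GET /api/v1/nodes/ha-181247-m02 roughly every half second until the NodeReady condition turns True, after which the per-pod waits begin. A client-go sketch of the same pattern follows; the kubeconfig path is the one the loader reported earlier and the timeout mirrors the 6m0s budget, but this is an illustration of the polling pattern rather than minikube's node_ready helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported by the loader line earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19662-11085/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait budget
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-181247-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-181247-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
	}
	fmt.Println("timed out waiting for the node to become Ready")
}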
	I0917 17:19:04.631534   29734 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:19:04.631615   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:04.631624   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.631631   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.631636   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.636011   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.643282   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.643389   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5lmg4
	I0917 17:19:04.643398   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.643409   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.643419   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.647868   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.648871   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.648884   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.648893   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.648898   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.652339   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.652851   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.652869   29734 pod_ready.go:82] duration metric: took 9.552348ms for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.652878   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.652932   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bdthh
	I0917 17:19:04.652940   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.652947   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.652950   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.657348   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:04.658736   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.658755   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.658761   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.658764   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.662294   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.663051   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.663067   29734 pod_ready.go:82] duration metric: took 10.183659ms for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.663076   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.663126   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247
	I0917 17:19:04.663134   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.663140   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.663144   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.666354   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:04.667375   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:04.667390   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.667398   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.667401   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.672811   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:19:04.673198   29734 pod_ready.go:93] pod "etcd-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.673215   29734 pod_ready.go:82] duration metric: took 10.133505ms for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.673224   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.673291   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m02
	I0917 17:19:04.673300   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.673306   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.673309   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.676064   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:19:04.676574   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:04.676588   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.676595   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.676599   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.679297   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:19:04.680195   29734 pod_ready.go:93] pod "etcd-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:04.680211   29734 pod_ready.go:82] duration metric: took 6.968087ms for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.680224   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:04.828649   29734 request.go:632] Waited for 148.367571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:19:04.828725   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:19:04.828731   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:04.828738   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:04.828741   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:04.833066   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:05.028113   29734 request.go:632] Waited for 194.320349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.028199   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.028209   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.028219   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.028229   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.031956   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.032561   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.032584   29734 pod_ready.go:82] duration metric: took 352.352224ms for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.032596   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.228614   29734 request.go:632] Waited for 195.953875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:19:05.228698   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:19:05.228703   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.228712   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.228719   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.232270   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.428630   29734 request.go:632] Waited for 195.391292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:05.428712   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:05.428719   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.428726   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.428731   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.432195   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.432785   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.432806   29734 pod_ready.go:82] duration metric: took 400.203438ms for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.432816   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.627771   29734 request.go:632] Waited for 194.898858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:19:05.627856   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:19:05.627862   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.627869   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.627874   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.631519   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.827768   29734 request.go:632] Waited for 195.295968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.827821   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:05.827827   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:05.827835   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:05.827839   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:05.831185   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:05.831809   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:05.831830   29734 pod_ready.go:82] duration metric: took 399.00684ms for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:05.831840   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.027853   29734 request.go:632] Waited for 195.925024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:19:06.027921   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:19:06.027928   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.027937   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.027944   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.031819   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.227852   29734 request.go:632] Waited for 195.333615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:06.227914   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:06.227920   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.227928   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.227932   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.231334   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.231806   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:06.231822   29734 pod_ready.go:82] duration metric: took 399.976189ms for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.231832   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.428530   29734 request.go:632] Waited for 196.625704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:19:06.428599   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:19:06.428608   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.428619   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.428628   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.432403   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.628466   29734 request.go:632] Waited for 195.4225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:06.628538   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:06.628548   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.628555   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.628561   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.631799   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:06.632499   29734 pod_ready.go:93] pod "kube-proxy-7rrxk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:06.632521   29734 pod_ready.go:82] duration metric: took 400.682725ms for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.632533   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:06.828512   29734 request.go:632] Waited for 195.904312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:19:06.828585   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:19:06.828590   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:06.828597   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:06.828609   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:06.832250   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.028409   29734 request.go:632] Waited for 195.37963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.028509   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.028520   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.028531   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.028539   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.031632   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.032132   29734 pod_ready.go:93] pod "kube-proxy-xmfcj" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.032154   29734 pod_ready.go:82] duration metric: took 399.612352ms for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.032166   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.228285   29734 request.go:632] Waited for 196.052237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:19:07.228353   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:19:07.228358   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.228365   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.228370   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.231898   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.428163   29734 request.go:632] Waited for 195.609083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:07.428234   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:19:07.428239   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.428247   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.428257   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.431921   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.432585   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.432606   29734 pod_ready.go:82] duration metric: took 400.431576ms for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.432615   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.628703   29734 request.go:632] Waited for 196.028502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:19:07.628784   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:19:07.628794   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.628801   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.628806   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.632103   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.828108   29734 request.go:632] Waited for 195.437367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.828177   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:19:07.828184   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.828193   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.828198   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.831700   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:19:07.832276   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:19:07.832297   29734 pod_ready.go:82] duration metric: took 399.675807ms for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:19:07.832310   29734 pod_ready.go:39] duration metric: took 3.200765806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
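
The block above is the readiness loop from minikube's pod_ready.go: for each system-critical pod it GETs the pod, then the node it runs on, and checks the Ready condition. A minimal client-go sketch of the same condition check follows; the kubeconfig path is a placeholder and the pod name is simply one of the pods listed above, so treat it as an illustration rather than minikube's actual helper.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True,
// mirroring the `"Ready":"True"` checks in the log above.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig pointing at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-5lmg4", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
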
	I0917 17:19:07.832330   29734 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:19:07.832384   29734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:19:07.849109   29734 api_server.go:72] duration metric: took 18.011632627s to wait for apiserver process to appear ...
	I0917 17:19:07.849139   29734 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:19:07.849160   29734 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0917 17:19:07.853417   29734 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0917 17:19:07.853492   29734 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0917 17:19:07.853502   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:07.853515   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:07.853524   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:07.854467   29734 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 17:19:07.854585   29734 api_server.go:141] control plane version: v1.31.1
	I0917 17:19:07.854603   29734 api_server.go:131] duration metric: took 5.457921ms to wait for apiserver health ...
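
The healthz and version probes above are plain HTTPS GETs against the apiserver. A rough equivalent in Go is sketched below, assuming the endpoint accepts the request; certificate verification is skipped here only to keep the sketch short, whereas the real client trusts the cluster's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustration only: skip TLS verification so the sketch stays self-contained.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.195:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
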
	I0917 17:19:07.854613   29734 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:19:08.027910   29734 request.go:632] Waited for 173.234881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.028000   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.028009   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.028020   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.028029   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.032889   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.038488   29734 system_pods.go:59] 17 kube-system pods found
	I0917 17:19:08.038523   29734 system_pods.go:61] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:19:08.038531   29734 system_pods.go:61] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:19:08.038538   29734 system_pods.go:61] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:19:08.038544   29734 system_pods.go:61] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:19:08.038548   29734 system_pods.go:61] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:19:08.038554   29734 system_pods.go:61] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:19:08.038560   29734 system_pods.go:61] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:19:08.038565   29734 system_pods.go:61] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:19:08.038571   29734 system_pods.go:61] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:19:08.038576   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:19:08.038581   29734 system_pods.go:61] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:19:08.038588   29734 system_pods.go:61] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:19:08.038594   29734 system_pods.go:61] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:19:08.038602   29734 system_pods.go:61] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:19:08.038608   29734 system_pods.go:61] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:19:08.038614   29734 system_pods.go:61] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:19:08.038619   29734 system_pods.go:61] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:19:08.038630   29734 system_pods.go:74] duration metric: took 184.006064ms to wait for pod list to return data ...
	I0917 17:19:08.038642   29734 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:19:08.228086   29734 request.go:632] Waited for 189.360557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:19:08.228158   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:19:08.228164   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.228171   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.228175   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.232546   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.232759   29734 default_sa.go:45] found service account: "default"
	I0917 17:19:08.232777   29734 default_sa.go:55] duration metric: took 194.128353ms for default service account to be created ...
	I0917 17:19:08.232788   29734 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:19:08.428219   29734 request.go:632] Waited for 195.365702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.428285   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:19:08.428291   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.428298   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.428302   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.435169   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:19:08.440681   29734 system_pods.go:86] 17 kube-system pods found
	I0917 17:19:08.440708   29734 system_pods.go:89] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:19:08.440713   29734 system_pods.go:89] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:19:08.440718   29734 system_pods.go:89] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:19:08.440721   29734 system_pods.go:89] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:19:08.440725   29734 system_pods.go:89] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:19:08.440729   29734 system_pods.go:89] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:19:08.440732   29734 system_pods.go:89] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:19:08.440736   29734 system_pods.go:89] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:19:08.440739   29734 system_pods.go:89] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:19:08.440743   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:19:08.440746   29734 system_pods.go:89] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:19:08.440749   29734 system_pods.go:89] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:19:08.440753   29734 system_pods.go:89] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:19:08.440756   29734 system_pods.go:89] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:19:08.440759   29734 system_pods.go:89] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:19:08.440762   29734 system_pods.go:89] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:19:08.440765   29734 system_pods.go:89] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:19:08.440771   29734 system_pods.go:126] duration metric: took 207.978033ms to wait for k8s-apps to be running ...
	I0917 17:19:08.440782   29734 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:19:08.440838   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:19:08.456872   29734 system_svc.go:56] duration metric: took 16.081642ms WaitForService to wait for kubelet
	I0917 17:19:08.456902   29734 kubeadm.go:582] duration metric: took 18.619431503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:19:08.456921   29734 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:19:08.628409   29734 request.go:632] Waited for 171.408526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0917 17:19:08.628469   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0917 17:19:08.628474   29734 round_trippers.go:469] Request Headers:
	I0917 17:19:08.628482   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:19:08.628486   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:19:08.632523   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:19:08.633362   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:19:08.633400   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:19:08.633421   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:19:08.633426   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:19:08.633432   29734 node_conditions.go:105] duration metric: took 176.504883ms to run NodePressure ...
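
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which defaults to QPS=5 and Burst=10, so bursts of GETs like the ones above get delayed on the client. A short sketch of how those knobs sit on a rest.Config; the numbers and kubeconfig path are illustrative, not values taken from this test.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; raising them reduces the
	// client-side waits reported in the log (values here are arbitrary).
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
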
	I0917 17:19:08.633446   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:19:08.633478   29734 start.go:255] writing updated cluster config ...
	I0917 17:19:08.635758   29734 out.go:201] 
	I0917 17:19:08.637200   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:08.637324   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:08.638868   29734 out.go:177] * Starting "ha-181247-m03" control-plane node in "ha-181247" cluster
	I0917 17:19:08.639925   29734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:19:08.639949   29734 cache.go:56] Caching tarball of preloaded images
	I0917 17:19:08.640044   29734 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:19:08.640054   29734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:19:08.640142   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:08.640314   29734 start.go:360] acquireMachinesLock for ha-181247-m03: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:19:08.640366   29734 start.go:364] duration metric: took 33.619µs to acquireMachinesLock for "ha-181247-m03"
	I0917 17:19:08.640384   29734 start.go:93] Provisioning new machine with config: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:19:08.640476   29734 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0917 17:19:08.641862   29734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 17:19:08.641944   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:08.641977   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:08.657294   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0917 17:19:08.657808   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:08.658350   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:08.658370   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:08.658741   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:08.658909   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:08.659047   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:08.659174   29734 start.go:159] libmachine.API.Create for "ha-181247" (driver="kvm2")
	I0917 17:19:08.659216   29734 client.go:168] LocalClient.Create starting
	I0917 17:19:08.659266   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 17:19:08.659308   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:19:08.659335   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:19:08.659406   29734 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 17:19:08.659432   29734 main.go:141] libmachine: Decoding PEM data...
	I0917 17:19:08.659448   29734 main.go:141] libmachine: Parsing certificate...
	I0917 17:19:08.659476   29734 main.go:141] libmachine: Running pre-create checks...
	I0917 17:19:08.659487   29734 main.go:141] libmachine: (ha-181247-m03) Calling .PreCreateCheck
	I0917 17:19:08.659660   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:08.660045   29734 main.go:141] libmachine: Creating machine...
	I0917 17:19:08.660059   29734 main.go:141] libmachine: (ha-181247-m03) Calling .Create
	I0917 17:19:08.660254   29734 main.go:141] libmachine: (ha-181247-m03) Creating KVM machine...
	I0917 17:19:08.661565   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found existing default KVM network
	I0917 17:19:08.661742   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found existing private KVM network mk-ha-181247
	I0917 17:19:08.661892   29734 main.go:141] libmachine: (ha-181247-m03) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 ...
	I0917 17:19:08.661927   29734 main.go:141] libmachine: (ha-181247-m03) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 17:19:08.662011   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:08.661912   30902 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:19:08.662088   29734 main.go:141] libmachine: (ha-181247-m03) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 17:19:08.890784   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:08.890624   30902 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa...
	I0917 17:19:09.187633   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:09.187519   30902 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/ha-181247-m03.rawdisk...
	I0917 17:19:09.187663   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Writing magic tar header
	I0917 17:19:09.187673   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Writing SSH key tar header
	I0917 17:19:09.187681   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:09.187637   30902 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 ...
	I0917 17:19:09.187772   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03
	I0917 17:19:09.187787   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 17:19:09.187812   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:19:09.187825   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03 (perms=drwx------)
	I0917 17:19:09.187835   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 17:19:09.187842   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 17:19:09.187848   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 17:19:09.187858   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 17:19:09.187865   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home/jenkins
	I0917 17:19:09.187872   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Checking permissions on dir: /home
	I0917 17:19:09.187877   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Skipping /home - not owner
	I0917 17:19:09.187886   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 17:19:09.187894   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 17:19:09.187900   29734 main.go:141] libmachine: (ha-181247-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 17:19:09.187907   29734 main.go:141] libmachine: (ha-181247-m03) Creating domain...
	I0917 17:19:09.189156   29734 main.go:141] libmachine: (ha-181247-m03) define libvirt domain using xml: 
	I0917 17:19:09.189172   29734 main.go:141] libmachine: (ha-181247-m03) <domain type='kvm'>
	I0917 17:19:09.189187   29734 main.go:141] libmachine: (ha-181247-m03)   <name>ha-181247-m03</name>
	I0917 17:19:09.189198   29734 main.go:141] libmachine: (ha-181247-m03)   <memory unit='MiB'>2200</memory>
	I0917 17:19:09.189203   29734 main.go:141] libmachine: (ha-181247-m03)   <vcpu>2</vcpu>
	I0917 17:19:09.189207   29734 main.go:141] libmachine: (ha-181247-m03)   <features>
	I0917 17:19:09.189212   29734 main.go:141] libmachine: (ha-181247-m03)     <acpi/>
	I0917 17:19:09.189216   29734 main.go:141] libmachine: (ha-181247-m03)     <apic/>
	I0917 17:19:09.189220   29734 main.go:141] libmachine: (ha-181247-m03)     <pae/>
	I0917 17:19:09.189224   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189242   29734 main.go:141] libmachine: (ha-181247-m03)   </features>
	I0917 17:19:09.189249   29734 main.go:141] libmachine: (ha-181247-m03)   <cpu mode='host-passthrough'>
	I0917 17:19:09.189257   29734 main.go:141] libmachine: (ha-181247-m03)   
	I0917 17:19:09.189262   29734 main.go:141] libmachine: (ha-181247-m03)   </cpu>
	I0917 17:19:09.189271   29734 main.go:141] libmachine: (ha-181247-m03)   <os>
	I0917 17:19:09.189275   29734 main.go:141] libmachine: (ha-181247-m03)     <type>hvm</type>
	I0917 17:19:09.189282   29734 main.go:141] libmachine: (ha-181247-m03)     <boot dev='cdrom'/>
	I0917 17:19:09.189286   29734 main.go:141] libmachine: (ha-181247-m03)     <boot dev='hd'/>
	I0917 17:19:09.189293   29734 main.go:141] libmachine: (ha-181247-m03)     <bootmenu enable='no'/>
	I0917 17:19:09.189297   29734 main.go:141] libmachine: (ha-181247-m03)   </os>
	I0917 17:19:09.189302   29734 main.go:141] libmachine: (ha-181247-m03)   <devices>
	I0917 17:19:09.189309   29734 main.go:141] libmachine: (ha-181247-m03)     <disk type='file' device='cdrom'>
	I0917 17:19:09.189368   29734 main.go:141] libmachine: (ha-181247-m03)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/boot2docker.iso'/>
	I0917 17:19:09.189394   29734 main.go:141] libmachine: (ha-181247-m03)       <target dev='hdc' bus='scsi'/>
	I0917 17:19:09.189406   29734 main.go:141] libmachine: (ha-181247-m03)       <readonly/>
	I0917 17:19:09.189417   29734 main.go:141] libmachine: (ha-181247-m03)     </disk>
	I0917 17:19:09.189430   29734 main.go:141] libmachine: (ha-181247-m03)     <disk type='file' device='disk'>
	I0917 17:19:09.189444   29734 main.go:141] libmachine: (ha-181247-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 17:19:09.189463   29734 main.go:141] libmachine: (ha-181247-m03)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/ha-181247-m03.rawdisk'/>
	I0917 17:19:09.189481   29734 main.go:141] libmachine: (ha-181247-m03)       <target dev='hda' bus='virtio'/>
	I0917 17:19:09.189493   29734 main.go:141] libmachine: (ha-181247-m03)     </disk>
	I0917 17:19:09.189500   29734 main.go:141] libmachine: (ha-181247-m03)     <interface type='network'>
	I0917 17:19:09.189510   29734 main.go:141] libmachine: (ha-181247-m03)       <source network='mk-ha-181247'/>
	I0917 17:19:09.189521   29734 main.go:141] libmachine: (ha-181247-m03)       <model type='virtio'/>
	I0917 17:19:09.189533   29734 main.go:141] libmachine: (ha-181247-m03)     </interface>
	I0917 17:19:09.189541   29734 main.go:141] libmachine: (ha-181247-m03)     <interface type='network'>
	I0917 17:19:09.189577   29734 main.go:141] libmachine: (ha-181247-m03)       <source network='default'/>
	I0917 17:19:09.189603   29734 main.go:141] libmachine: (ha-181247-m03)       <model type='virtio'/>
	I0917 17:19:09.189620   29734 main.go:141] libmachine: (ha-181247-m03)     </interface>
	I0917 17:19:09.189637   29734 main.go:141] libmachine: (ha-181247-m03)     <serial type='pty'>
	I0917 17:19:09.189648   29734 main.go:141] libmachine: (ha-181247-m03)       <target port='0'/>
	I0917 17:19:09.189655   29734 main.go:141] libmachine: (ha-181247-m03)     </serial>
	I0917 17:19:09.189666   29734 main.go:141] libmachine: (ha-181247-m03)     <console type='pty'>
	I0917 17:19:09.189676   29734 main.go:141] libmachine: (ha-181247-m03)       <target type='serial' port='0'/>
	I0917 17:19:09.189681   29734 main.go:141] libmachine: (ha-181247-m03)     </console>
	I0917 17:19:09.189686   29734 main.go:141] libmachine: (ha-181247-m03)     <rng model='virtio'>
	I0917 17:19:09.189696   29734 main.go:141] libmachine: (ha-181247-m03)       <backend model='random'>/dev/random</backend>
	I0917 17:19:09.189708   29734 main.go:141] libmachine: (ha-181247-m03)     </rng>
	I0917 17:19:09.189719   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189725   29734 main.go:141] libmachine: (ha-181247-m03)     
	I0917 17:19:09.189733   29734 main.go:141] libmachine: (ha-181247-m03)   </devices>
	I0917 17:19:09.189744   29734 main.go:141] libmachine: (ha-181247-m03) </domain>
	I0917 17:19:09.189754   29734 main.go:141] libmachine: (ha-181247-m03) 
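
The XML logged above is the libvirt domain definition for the new m03 machine. The kvm2 driver defines it through the libvirt API; the sketch below shows the equivalent steps with the virsh CLI and a heavily trimmed domain, so the name and paths are placeholders rather than anything from this run.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Assumption: a cut-down domain definition, written to disk first.
	// The real definition includes the disks, interfaces, serial console
	// and RNG device shown in the log above.
	const domainXML = `<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`
	if err := os.WriteFile("/tmp/demo-vm.xml", []byte(domainXML), 0o644); err != nil {
		panic(err)
	}
	// `virsh define` registers the domain with libvirt; `virsh start` boots it.
	for _, args := range [][]string{
		{"define", "/tmp/demo-vm.xml"},
		{"start", "demo-vm"},
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
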
	I0917 17:19:09.196712   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:88:12:68 in network default
	I0917 17:19:09.197192   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:09.197208   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring networks are active...
	I0917 17:19:09.197831   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring network default is active
	I0917 17:19:09.198205   29734 main.go:141] libmachine: (ha-181247-m03) Ensuring network mk-ha-181247 is active
	I0917 17:19:09.198544   29734 main.go:141] libmachine: (ha-181247-m03) Getting domain xml...
	I0917 17:19:09.199186   29734 main.go:141] libmachine: (ha-181247-m03) Creating domain...
	I0917 17:19:10.470752   29734 main.go:141] libmachine: (ha-181247-m03) Waiting to get IP...
	I0917 17:19:10.471534   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:10.472003   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:10.472058   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:10.471980   30902 retry.go:31] will retry after 230.368754ms: waiting for machine to come up
	I0917 17:19:10.704673   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:10.705152   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:10.705180   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:10.705104   30902 retry.go:31] will retry after 344.628649ms: waiting for machine to come up
	I0917 17:19:11.051458   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.051952   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.051969   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.051922   30902 retry.go:31] will retry after 429.299996ms: waiting for machine to come up
	I0917 17:19:11.482452   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.482986   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.483018   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.482928   30902 retry.go:31] will retry after 445.767937ms: waiting for machine to come up
	I0917 17:19:11.930607   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:11.931010   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:11.931032   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:11.930985   30902 retry.go:31] will retry after 522.333996ms: waiting for machine to come up
	I0917 17:19:12.455383   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:12.455913   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:12.455960   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:12.455891   30902 retry.go:31] will retry after 687.049109ms: waiting for machine to come up
	I0917 17:19:13.144894   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:13.145357   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:13.145382   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:13.145313   30902 retry.go:31] will retry after 1.171486205s: waiting for machine to come up
	I0917 17:19:14.317844   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:14.318370   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:14.318397   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:14.318330   30902 retry.go:31] will retry after 1.218607108s: waiting for machine to come up
	I0917 17:19:15.539487   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:15.540058   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:15.540083   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:15.540017   30902 retry.go:31] will retry after 1.749617094s: waiting for machine to come up
	I0917 17:19:17.290964   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:17.291439   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:17.291474   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:17.291380   30902 retry.go:31] will retry after 2.306914749s: waiting for machine to come up
	I0917 17:19:19.599499   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:19.599990   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:19.600020   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:19.599937   30902 retry.go:31] will retry after 2.681763013s: waiting for machine to come up
	I0917 17:19:22.284617   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:22.284998   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:22.285015   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:22.284962   30902 retry.go:31] will retry after 3.378188576s: waiting for machine to come up
	I0917 17:19:25.665734   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:25.666176   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:25.666198   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:25.666147   30902 retry.go:31] will retry after 2.801526949s: waiting for machine to come up
	I0917 17:19:28.471310   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:28.471831   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find current IP address of domain ha-181247-m03 in network mk-ha-181247
	I0917 17:19:28.471868   29734 main.go:141] libmachine: (ha-181247-m03) DBG | I0917 17:19:28.471800   30902 retry.go:31] will retry after 4.266119746s: waiting for machine to come up
	I0917 17:19:32.742334   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.742918   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has current primary IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.742940   29734 main.go:141] libmachine: (ha-181247-m03) Found IP for machine: 192.168.39.122
	I0917 17:19:32.742954   29734 main.go:141] libmachine: (ha-181247-m03) Reserving static IP address...
	I0917 17:19:32.743333   29734 main.go:141] libmachine: (ha-181247-m03) DBG | unable to find host DHCP lease matching {name: "ha-181247-m03", mac: "52:54:00:48:b5:33", ip: "192.168.39.122"} in network mk-ha-181247
	I0917 17:19:32.819277   29734 main.go:141] libmachine: (ha-181247-m03) Reserved static IP address: 192.168.39.122
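
The "will retry after ..." lines above are a wait loop with a randomized, growing delay that polls until the freshly booted VM picks up a DHCP lease. A self-contained sketch of that pattern follows, with a stubbed-out lookup standing in for the real libvirt lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// attempts and lookupIP are a hypothetical stand-in for the real check: the
// stub fails a few times before returning an address, like a VM that has not
// yet requested a lease.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("no lease yet")
	}
	return "192.168.39.122", nil
}

func main() {
	// Retry with a jittered, doubling delay until an IP appears or a
	// deadline passes, mirroring the retry.go messages in the log.
	deadline := time.Now().Add(2 * time.Minute)
	delay := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for IP")
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2 // back off
	}
}
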
	I0917 17:19:32.819306   29734 main.go:141] libmachine: (ha-181247-m03) Waiting for SSH to be available...
	I0917 17:19:32.819316   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Getting to WaitForSSH function...
	I0917 17:19:32.821761   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.822169   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:32.822190   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.822367   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using SSH client type: external
	I0917 17:19:32.822395   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa (-rw-------)
	I0917 17:19:32.822427   29734 main.go:141] libmachine: (ha-181247-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 17:19:32.822453   29734 main.go:141] libmachine: (ha-181247-m03) DBG | About to run SSH command:
	I0917 17:19:32.822467   29734 main.go:141] libmachine: (ha-181247-m03) DBG | exit 0
	I0917 17:19:32.953457   29734 main.go:141] libmachine: (ha-181247-m03) DBG | SSH cmd err, output: <nil>: 
	I0917 17:19:32.953739   29734 main.go:141] libmachine: (ha-181247-m03) KVM machine creation complete!
	I0917 17:19:32.954036   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:32.954714   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:32.954923   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:32.955073   29734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 17:19:32.955089   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:19:32.956240   29734 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 17:19:32.956256   29734 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 17:19:32.956263   29734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 17:19:32.956278   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:32.958371   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.958730   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:32.958751   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:32.958900   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:32.959056   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:32.959167   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:32.959266   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:32.959385   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:32.959602   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:32.959614   29734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 17:19:33.068826   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
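The WaitForSSH step above probes the guest by running `exit 0` through an SSH client with non-interactive options (no host-key prompts, key-only auth, 10s connect timeout). A hedged Go sketch of such a probe using os/exec and the system ssh binary follows; the retry count and sleep interval are assumptions, and the user, IP and key path are taken from the log lines above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive runs "exit 0" over the system ssh client with the same kind of
// non-interactive options seen in the log, returning nil once the guest
// accepts the connection.
func sshAlive(user, ip, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa"
	for i := 0; i < 5; i++ {
		if err := sshAlive("docker", "192.168.39.122", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH never came up")
}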
	I0917 17:19:33.068847   29734 main.go:141] libmachine: Detecting the provisioner...
	I0917 17:19:33.068856   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.071615   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.072011   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.072039   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.072171   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.072367   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.072508   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.072621   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.072787   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.072944   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.072953   29734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 17:19:33.186697   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 17:19:33.186760   29734 main.go:141] libmachine: found compatible host: buildroot
	I0917 17:19:33.186770   29734 main.go:141] libmachine: Provisioning with buildroot...
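The provisioner detection above boils down to reading /etc/os-release over SSH and matching its ID field ("buildroot" in the captured output). The Go sketch below parses that key=value format locally; osReleaseID is an illustrative helper, not libmachine's implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID extracts the ID= field from an os-release style file,
// e.g. ID=buildroot in the output captured above.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// The provisioner is chosen from this identifier; "buildroot" selects
	// the Buildroot-based provisioning used for the minikube guest image.
	fmt.Println("detected distro:", id)
}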
	I0917 17:19:33.186781   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.187034   29734 buildroot.go:166] provisioning hostname "ha-181247-m03"
	I0917 17:19:33.187063   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.187269   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.189788   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.190166   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.190198   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.190387   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.190562   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.190695   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.190795   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.190937   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.191097   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.191108   29734 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247-m03 && echo "ha-181247-m03" | sudo tee /etc/hostname
	I0917 17:19:33.316880   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247-m03
	
	I0917 17:19:33.316904   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.319374   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.319803   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.319837   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.319999   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.320190   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.320329   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.320437   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.320568   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.320768   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.320792   29734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:19:33.445343   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:19:33.445372   29734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:19:33.445395   29734 buildroot.go:174] setting up certificates
	I0917 17:19:33.445411   29734 provision.go:84] configureAuth start
	I0917 17:19:33.445420   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetMachineName
	I0917 17:19:33.445691   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:33.448403   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.448827   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.448855   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.449004   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.451416   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.451797   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.451824   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.451990   29734 provision.go:143] copyHostCerts
	I0917 17:19:33.452021   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:19:33.452060   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:19:33.452073   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:19:33.452157   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:19:33.452252   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:19:33.452277   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:19:33.452287   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:19:33.452342   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:19:33.452417   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:19:33.452440   29734 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:19:33.452450   29734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:19:33.452487   29734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:19:33.452551   29734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247-m03 san=[127.0.0.1 192.168.39.122 ha-181247-m03 localhost minikube]
	I0917 17:19:33.590042   29734 provision.go:177] copyRemoteCerts
	I0917 17:19:33.590093   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:19:33.590120   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.592691   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.593024   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.593067   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.593247   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.593427   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.593600   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.593736   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:33.681307   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:19:33.681385   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:19:33.708421   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:19:33.708517   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 17:19:33.735759   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:19:33.735833   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:19:33.761523   29734 provision.go:87] duration metric: took 316.098149ms to configureAuth
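configureAuth above generates a server certificate whose subject alternative names cover the node IP, hostname, localhost and the in-cluster names (the san=[...] list in the log). The Go sketch below builds an x509 template with equivalent SANs; for brevity it self-signs the leaf, whereas the real flow signs it with the profile's ca.pem/ca-key.pem.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring the "san=[...]" list in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-181247-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-181247-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.122")},
	}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Self-signed here for brevity; the log's server.pem is instead signed
	// by the shared minikube CA so every node trusts it.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}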
	I0917 17:19:33.761555   29734 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:19:33.761848   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:33.761935   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:33.764433   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.764922   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:33.764961   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:33.765242   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:33.765475   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.765667   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:33.765834   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:33.766032   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:33.766257   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:33.766281   29734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:19:34.007705   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:19:34.007740   29734 main.go:141] libmachine: Checking connection to Docker...
	I0917 17:19:34.007752   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetURL
	I0917 17:19:34.009192   29734 main.go:141] libmachine: (ha-181247-m03) DBG | Using libvirt version 6000000
	I0917 17:19:34.011683   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.012061   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.012101   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.012253   29734 main.go:141] libmachine: Docker is up and running!
	I0917 17:19:34.012267   29734 main.go:141] libmachine: Reticulating splines...
	I0917 17:19:34.012274   29734 client.go:171] duration metric: took 25.353048014s to LocalClient.Create
	I0917 17:19:34.012303   29734 start.go:167] duration metric: took 25.35312837s to libmachine.API.Create "ha-181247"
	I0917 17:19:34.012316   29734 start.go:293] postStartSetup for "ha-181247-m03" (driver="kvm2")
	I0917 17:19:34.012329   29734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:19:34.012362   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.012602   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:19:34.012626   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.015389   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.015790   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.015816   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.016029   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.016197   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.016319   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.016473   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.104236   29734 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:19:34.108602   29734 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:19:34.108625   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:19:34.108692   29734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:19:34.108762   29734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:19:34.108774   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:19:34.108863   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:19:34.118497   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:19:34.143886   29734 start.go:296] duration metric: took 131.555198ms for postStartSetup
	I0917 17:19:34.143930   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetConfigRaw
	I0917 17:19:34.144583   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:34.147117   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.147484   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.147515   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.147804   29734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:19:34.148012   29734 start.go:128] duration metric: took 25.507526501s to createHost
	I0917 17:19:34.148037   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.150418   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.150758   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.150785   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.150996   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.151166   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.151307   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.151445   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.151606   29734 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:34.151799   29734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0917 17:19:34.151814   29734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:19:34.262347   29734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726593574.236617709
	
	I0917 17:19:34.262366   29734 fix.go:216] guest clock: 1726593574.236617709
	I0917 17:19:34.262375   29734 fix.go:229] Guest: 2024-09-17 17:19:34.236617709 +0000 UTC Remote: 2024-09-17 17:19:34.148025415 +0000 UTC m=+141.888904346 (delta=88.592294ms)
	I0917 17:19:34.262395   29734 fix.go:200] guest clock delta is within tolerance: 88.592294ms
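The guest-clock check above runs `date +%s.%N` in the VM and compares the result with the host clock, accepting the machine when the drift is small. A hedged Go sketch of that comparison follows, using the two timestamps captured in the log; the 1s tolerance is an assumption for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (seconds.nanoseconds
// since the epoch) and returns how far it drifts from the reference time.
func clockDelta(guestOut string, now time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return now.Sub(guest), nil
}

func main() {
	// Values captured in the log above: the guest's date output and the
	// host-side "Remote" timestamp; 1s tolerance is an assumed threshold.
	const tolerance = time.Second
	delta, err := clockDelta("1726593574.236617709", time.Unix(0, 1726593574148025415))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock drifted by %v; would resync\n", delta)
	}
}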
	I0917 17:19:34.262400   29734 start.go:83] releasing machines lock for "ha-181247-m03", held for 25.622025247s
	I0917 17:19:34.262422   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.262684   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:34.265426   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.265760   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.265794   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.268093   29734 out.go:177] * Found network options:
	I0917 17:19:34.269521   29734 out.go:177]   - NO_PROXY=192.168.39.195,192.168.39.11
	W0917 17:19:34.270946   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 17:19:34.270971   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:19:34.270990   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271522   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271710   29734 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:19:34.271824   29734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:19:34.271864   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	W0917 17:19:34.271881   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	W0917 17:19:34.271901   29734 proxy.go:119] fail to check proxy env: Error ip not in block
	I0917 17:19:34.271971   29734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:19:34.271987   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:19:34.274729   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.274812   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275145   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.275165   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275213   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:34.275228   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:34.275325   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.275470   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:19:34.275551   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.275611   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:19:34.275729   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.275738   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:19:34.275876   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.275939   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:19:34.531860   29734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:19:34.538856   29734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:19:34.538991   29734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:19:34.556557   29734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 17:19:34.556582   29734 start.go:495] detecting cgroup driver to use...
	I0917 17:19:34.556664   29734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:19:34.574233   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:19:34.590846   29734 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:19:34.590914   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:19:34.606281   29734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:19:34.620682   29734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:19:34.740105   29734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:19:34.889013   29734 docker.go:233] disabling docker service ...
	I0917 17:19:34.889085   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:19:34.904179   29734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:19:34.918084   29734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:19:35.067080   29734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:19:35.213525   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:19:35.228510   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:19:35.249534   29734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:19:35.249615   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.261455   29734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:19:35.261533   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.273150   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.284319   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.296139   29734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:19:35.307727   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.318993   29734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.338300   29734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:19:35.350602   29734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:19:35.360817   29734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 17:19:35.360880   29734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 17:19:35.375350   29734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:19:35.385443   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:19:35.508400   29734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:19:35.609779   29734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:19:35.609860   29734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:19:35.614634   29734 start.go:563] Will wait 60s for crictl version
	I0917 17:19:35.614701   29734 ssh_runner.go:195] Run: which crictl
	I0917 17:19:35.618547   29734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:19:35.659190   29734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:19:35.659274   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:19:35.689203   29734 ssh_runner.go:195] Run: crio --version
	I0917 17:19:35.721078   29734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:19:35.722575   29734 out.go:177]   - env NO_PROXY=192.168.39.195
	I0917 17:19:35.724058   29734 out.go:177]   - env NO_PROXY=192.168.39.195,192.168.39.11
	I0917 17:19:35.725224   29734 main.go:141] libmachine: (ha-181247-m03) Calling .GetIP
	I0917 17:19:35.728092   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:35.728468   29734 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:19:35.728496   29734 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:19:35.728708   29734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:19:35.733137   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:19:35.746300   29734 mustload.go:65] Loading cluster: ha-181247
	I0917 17:19:35.746532   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:19:35.746838   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:35.746876   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:35.762171   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 17:19:35.762651   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:35.763150   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:35.763183   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:35.763541   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:35.763747   29734 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:19:35.765372   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:19:35.765673   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:35.765714   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:35.781734   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0917 17:19:35.782089   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:35.782536   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:35.782558   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:35.782909   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:35.783101   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:19:35.783265   29734 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.122
	I0917 17:19:35.783277   29734 certs.go:194] generating shared ca certs ...
	I0917 17:19:35.783294   29734 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.783429   29734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:19:35.783466   29734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:19:35.783476   29734 certs.go:256] generating profile certs ...
	I0917 17:19:35.783540   29734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:19:35.783565   29734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327
	I0917 17:19:35.783578   29734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.122 192.168.39.254]
	I0917 17:19:35.857068   29734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 ...
	I0917 17:19:35.857099   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327: {Name:mkaec4fe728dbd262613238450879676d5138a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.857295   29734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327 ...
	I0917 17:19:35.857310   29734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327: {Name:mkae136412c99dae36859e1e80126c8d56b77cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:19:35.857389   29734 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.661a4327 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:19:35.857574   29734 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.661a4327 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:19:35.857700   29734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:19:35.857715   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:19:35.857734   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:19:35.857746   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:19:35.857759   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:19:35.857771   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:19:35.857784   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:19:35.857796   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:19:35.873337   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:19:35.873452   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:19:35.873484   29734 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:19:35.873494   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:19:35.873520   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:19:35.873544   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:19:35.873572   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:19:35.873611   29734 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:19:35.873640   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:19:35.873663   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:19:35.873674   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:35.873707   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:19:35.876770   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:35.877171   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:19:35.877197   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:35.877480   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:19:35.877668   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:19:35.877831   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:19:35.877940   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:19:35.953578   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 17:19:35.959804   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 17:19:35.980063   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 17:19:35.986792   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 17:19:35.999691   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 17:19:36.005080   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 17:19:36.019905   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 17:19:36.025075   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 17:19:36.039517   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 17:19:36.044604   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 17:19:36.056083   29734 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 17:19:36.061370   29734 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0917 17:19:36.074689   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:19:36.101573   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:19:36.127141   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:19:36.153027   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:19:36.178358   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 17:19:36.203619   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 17:19:36.228855   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:19:36.254491   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:19:36.280182   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:19:36.305547   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:19:36.331470   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:19:36.358264   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 17:19:36.377242   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 17:19:36.395522   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 17:19:36.413957   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 17:19:36.432410   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 17:19:36.450293   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0917 17:19:36.467354   29734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 17:19:36.488029   29734 ssh_runner.go:195] Run: openssl version
	I0917 17:19:36.494263   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:19:36.505981   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.510479   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.510526   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:19:36.516347   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:19:36.527975   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:19:36.539870   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.545269   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.545333   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:19:36.551325   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:19:36.562948   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:19:36.574691   29734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.579551   29734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.579620   29734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:19:36.585554   29734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:19:36.598719   29734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:19:36.604510   29734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:19:36.604566   29734 kubeadm.go:934] updating node {m03 192.168.39.122 8443 v1.31.1 crio true true} ...
	I0917 17:19:36.604637   29734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:19:36.604662   29734 kube-vip.go:115] generating kube-vip config ...
	I0917 17:19:36.604697   29734 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:19:36.622296   29734 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:19:36.622381   29734 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 17:19:36.622452   29734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:19:36.632840   29734 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0917 17:19:36.632903   29734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0917 17:19:36.644101   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0917 17:19:36.644127   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:19:36.644149   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0917 17:19:36.644170   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:19:36.644172   29734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0917 17:19:36.644181   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 17:19:36.644215   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:19:36.644230   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 17:19:36.664028   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0917 17:19:36.664070   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0917 17:19:36.664139   29734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:19:36.664192   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0917 17:19:36.664220   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0917 17:19:36.664236   29734 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 17:19:36.700097   29734 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0917 17:19:36.700145   29734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
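
The block above is minikube's lazy binary provisioning: kubectl, kubeadm and kubelet are each probed with stat under /var/lib/minikube/binaries/v1.31.1 on the node, and only transferred from the local cache when the probe exits non-zero. Below is a minimal local-filesystem sketch of the same check-then-copy idea; it deliberately omits the SSH layer (the log runs both the stat and the scp through minikube's ssh_runner), and the paths in main are only examples.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing, mirroring the
// stat-then-transfer check in the log.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already provisioned
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Example paths modelled on the cache/target layout shown in the log.
	cache := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1")
	target := "/var/lib/minikube/binaries/v1.31.1"
	for _, b := range []string{"kubectl", "kubeadm", "kubelet"} {
		if err := ensureBinary(filepath.Join(cache, b), filepath.Join(target, b)); err != nil {
			fmt.Fprintln(os.Stderr, "transfer failed:", err)
		}
	}
}
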
	I0917 17:19:37.652846   29734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 17:19:37.663720   29734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 17:19:37.681818   29734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:19:37.700412   29734 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:19:37.720111   29734 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:19:37.724467   29734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:19:37.738316   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:19:37.877851   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
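
Before starting the kubelet, the run rewrites /etc/hosts so that control-plane.minikube.internal resolves to the shared VIP 192.168.39.254 rather than to any single node: the bash one-liner strips any existing line ending in that hostname and appends the new mapping, which makes the step idempotent across restarts. The sketch below performs the same replace-then-append rewrite on a hosts-style string in Go, purely to illustrate the pattern; the function name upsertHostsEntry is ours and it only handles the simple "ip<TAB>hostname" layout matched by the grep in the log.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line already mapping the hostname and appends a
// fresh "ip<TAB>hostname" line, so repeated calls leave exactly one entry.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale mapping, whatever IP it pointed at
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.39.195\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHostsEntry(before, "192.168.39.254", "control-plane.minikube.internal"))
}
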
	I0917 17:19:37.897444   29734 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:19:37.897909   29734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:19:37.897966   29734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:19:37.916204   29734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0917 17:19:37.916645   29734 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:19:37.917142   29734 main.go:141] libmachine: Using API Version  1
	I0917 17:19:37.917169   29734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:19:37.917548   29734 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:19:37.917750   29734 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:19:37.917882   29734 start.go:317] joinCluster: &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:19:37.918049   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 17:19:37.918073   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:19:37.921635   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:37.922220   29734 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:19:37.922248   29734 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:19:37.922463   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:19:37.922813   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:19:37.923000   29734 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:19:37.923167   29734 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:19:38.097518   29734 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:19:38.097574   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ed067.7lqtjmb7q7q1uvw2 --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443"
	I0917 17:20:01.587638   29734 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ed067.7lqtjmb7q7q1uvw2 --discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-181247-m03 --control-plane --apiserver-advertise-address=192.168.39.122 --apiserver-bind-port=8443": (23.490043145s)
	I0917 17:20:01.587678   29734 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 17:20:02.179280   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-181247-m03 minikube.k8s.io/updated_at=2024_09_17T17_20_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=ha-181247 minikube.k8s.io/primary=false
	I0917 17:20:02.344849   29734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-181247-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 17:20:02.506759   29734 start.go:319] duration metric: took 24.58887463s to joinCluster
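
Once kubeadm join returns, the new control-plane node gets minikube's bookkeeping labels applied and its node-role.kubernetes.io/control-plane:NoSchedule taint removed (the trailing "-" in the taint command is kubectl's removal syntax), so test workloads can also schedule onto it. The log does this by invoking kubectl on the node; the same label update can also be done in-process with client-go, as in the hedged sketch below. Only the node name and one label key are taken from the log; the kubeconfig path and the rest of the snippet are assumptions for illustration.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that adds or overwrites a single node label,
	// comparable to `kubectl label --overwrite nodes ha-181247-m03 ...`.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.Background(), "ha-181247-m03",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", node.Labels)
}
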
	I0917 17:20:02.506838   29734 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 17:20:02.507278   29734 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:20:02.508859   29734 out.go:177] * Verifying Kubernetes components...
	I0917 17:20:02.511078   29734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:02.768010   29734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:20:02.800525   29734 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:20:02.800769   29734 kapi.go:59] client config for ha-181247: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.crt", KeyFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key", CAFile:"/home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 17:20:02.800825   29734 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.195:8443
	I0917 17:20:02.801006   29734 node_ready.go:35] waiting up to 6m0s for node "ha-181247-m03" to be "Ready" ...
	I0917 17:20:02.801066   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:02.801074   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:02.801081   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:02.801086   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:02.805370   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:03.301972   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:03.301996   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:03.302008   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:03.302015   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:03.305841   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:03.801643   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:03.801673   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:03.801684   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:03.801690   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:03.806263   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:04.301828   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:04.301851   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:04.301864   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:04.301873   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:04.305853   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:04.802066   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:04.802092   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:04.802101   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:04.802104   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:04.806363   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:04.806930   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:05.302264   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:05.302290   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:05.302302   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:05.302308   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:05.306375   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:05.801380   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:05.801411   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:05.801422   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:05.801427   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:05.805349   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:06.301374   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:06.301407   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:06.301422   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:06.301432   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:06.304898   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:06.801207   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:06.801264   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:06.801274   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:06.801277   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:06.804783   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:07.302189   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:07.302210   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:07.302221   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:07.302227   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:07.305561   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:07.306249   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:07.802160   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:07.802186   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:07.802198   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:07.802205   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:07.806023   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:08.301810   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:08.301834   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:08.301847   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:08.301851   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:08.309265   29734 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:08.801195   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:08.801217   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:08.801240   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:08.801245   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:08.804983   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.301155   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:09.301179   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:09.301187   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:09.301190   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:09.304767   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.801398   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:09.801421   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:09.801429   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:09.801433   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:09.805173   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:09.806007   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:10.301421   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:10.301445   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:10.301453   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:10.301458   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:10.304752   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:10.801766   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:10.801787   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:10.801795   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:10.801799   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:10.805910   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:11.301250   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:11.301272   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:11.301283   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:11.301287   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:11.305087   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:11.801381   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:11.801404   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:11.801414   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:11.801418   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:11.805431   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:11.806115   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:12.301979   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:12.302001   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:12.302011   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:12.302018   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:12.306005   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:12.802217   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:12.802239   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:12.802247   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:12.802252   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:12.805899   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:13.301283   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:13.301321   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:13.301330   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:13.301336   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:13.305773   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:13.801647   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:13.801669   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:13.801677   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:13.801683   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:13.805088   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:14.302183   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:14.302209   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:14.302221   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:14.302227   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:14.305690   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:14.306309   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:14.801430   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:14.801456   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:14.801466   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:14.801472   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:14.806457   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:15.301422   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:15.301449   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:15.301461   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:15.301469   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:15.305063   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:15.802100   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:15.802121   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:15.802129   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:15.802136   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:15.805547   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.301923   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:16.301945   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:16.301953   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:16.301957   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:16.305406   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.801782   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:16.801804   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:16.801813   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:16.801817   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:16.805309   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:16.806034   29734 node_ready.go:53] node "ha-181247-m03" has status "Ready":"False"
	I0917 17:20:17.301706   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.301732   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.301743   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.301751   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.305245   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.305731   29734 node_ready.go:49] node "ha-181247-m03" has status "Ready":"True"
	I0917 17:20:17.305749   29734 node_ready.go:38] duration metric: took 14.504731184s for node "ha-181247-m03" to be "Ready" ...
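
The wait above is a plain readiness poll: the same GET on /api/v1/nodes/ha-181247-m03 is issued roughly every 500ms until the node's Ready condition flips to True, which here takes about 14.5 seconds after the join. The log drives this through raw round trips; a minimal client-go sketch of the same loop is below. The kubeconfig path, the interval/timeout values and the pollNodeReady name are our assumptions, not minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady re-reads the Node every interval until its Ready condition is
// True or the timeout elapses, mirroring the ~500ms GET loop in the log.
func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := pollNodeReady(context.Background(), cs, "ha-181247-m03", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
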
	I0917 17:20:17.305757   29734 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:20:17.305816   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:17.305825   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.305832   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.305837   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.312471   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:17.319460   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.319541   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5lmg4
	I0917 17:20:17.319549   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.319556   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.319560   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.326614   29734 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:17.327955   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.327969   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.327977   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.327981   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.336447   29734 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0917 17:20:17.337223   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.337266   29734 pod_ready.go:82] duration metric: took 17.77938ms for pod "coredns-7c65d6cfc9-5lmg4" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.337278   29734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.337334   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bdthh
	I0917 17:20:17.337341   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.337348   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.337355   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.340474   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.341148   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.341166   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.341174   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.341178   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.343927   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.344502   29734 pod_ready.go:93] pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.344520   29734 pod_ready.go:82] duration metric: took 7.234573ms for pod "coredns-7c65d6cfc9-bdthh" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.344533   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.344596   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247
	I0917 17:20:17.344606   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.344616   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.344623   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.348107   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.348913   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:17.348924   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.348931   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.348937   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.351861   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.352464   29734 pod_ready.go:93] pod "etcd-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.352484   29734 pod_ready.go:82] duration metric: took 7.943434ms for pod "etcd-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.352498   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.352551   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m02
	I0917 17:20:17.352559   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.352566   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.352576   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.355372   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.355924   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:17.355937   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.355944   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.355948   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.358721   29734 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0917 17:20:17.359152   29734 pod_ready.go:93] pod "etcd-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.359167   29734 pod_ready.go:82] duration metric: took 6.66316ms for pod "etcd-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.359179   29734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.502636   29734 request.go:632] Waited for 143.380911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m03
	I0917 17:20:17.502720   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/etcd-ha-181247-m03
	I0917 17:20:17.502729   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.502741   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.502747   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.506289   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.702257   29734 request.go:632] Waited for 195.390906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.702343   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:17.702351   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.702360   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.702370   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.705911   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:17.706599   29734 pod_ready.go:93] pod "etcd-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:17.706622   29734 pod_ready.go:82] duration metric: took 347.432415ms for pod "etcd-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
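
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from the Kubernetes client's own rate limiter, not from the API server: the kapi client config earlier leaves QPS and Burst at zero, so client-go falls back to its modest defaults and spaces out the burst of per-pod and per-node GETs by a couple hundred milliseconds. If a poller needed to avoid those pauses, the limiter can be widened on the rest.Config before building the clientset; the snippet below is a minimal sketch of that, with arbitrary example values and an assumed kubeconfig path.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst at 0, client-go applies its own conservative defaults,
	// which is what produces the "client-side throttling" waits in the log.
	// Raising them widens the client-side limiter; the values are examples.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
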
	I0917 17:20:17.706639   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:17.902395   29734 request.go:632] Waited for 195.682205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:20:17.902475   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247
	I0917 17:20:17.902483   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:17.902494   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.902505   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:17.906384   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.102577   29734 request.go:632] Waited for 195.384056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:18.102628   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:18.102633   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.102643   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.102651   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.107608   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.108198   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.108225   29734 pod_ready.go:82] duration metric: took 401.578528ms for pod "kube-apiserver-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.108239   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.302202   29734 request.go:632] Waited for 193.888108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:20:18.302259   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m02
	I0917 17:20:18.302266   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.302276   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.302282   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.306431   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.502397   29734 request.go:632] Waited for 195.211721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:18.502464   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:18.502469   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.502477   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.502485   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.506567   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:18.507076   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.507093   29734 pod_ready.go:82] duration metric: took 398.84232ms for pod "kube-apiserver-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.507105   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.702660   29734 request.go:632] Waited for 195.459967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m03
	I0917 17:20:18.702724   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181247-m03
	I0917 17:20:18.702731   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.702742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.702752   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.706494   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.902093   29734 request.go:632] Waited for 194.812702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:18.902157   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:18.902162   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:18.902170   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.902175   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:18.905661   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:18.906182   29734 pod_ready.go:93] pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:18.906202   29734 pod_ready.go:82] duration metric: took 399.08599ms for pod "kube-apiserver-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:18.906213   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.102266   29734 request.go:632] Waited for 195.989867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:20:19.102334   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247
	I0917 17:20:19.102339   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.102346   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.102350   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.105958   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.302064   29734 request.go:632] Waited for 195.397143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:19.302136   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:19.302147   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.302159   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.302169   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.305615   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.306389   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:19.306409   29734 pod_ready.go:82] duration metric: took 400.188287ms for pod "kube-controller-manager-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.306422   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.502428   29734 request.go:632] Waited for 195.912747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:20:19.502485   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m02
	I0917 17:20:19.502491   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.502498   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.502503   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.506085   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.702442   29734 request.go:632] Waited for 195.383611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:19.702502   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:19.702509   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.702519   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.702535   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.705984   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:19.706637   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:19.706660   29734 pod_ready.go:82] duration metric: took 400.225093ms for pod "kube-controller-manager-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.706669   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:19.901720   29734 request.go:632] Waited for 194.990972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m03
	I0917 17:20:19.901798   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181247-m03
	I0917 17:20:19.901806   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:19.901815   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:19.901824   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:19.905444   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.102496   29734 request.go:632] Waited for 196.368768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.102579   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.102586   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.102600   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.102608   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.106315   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.106730   29734 pod_ready.go:93] pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.106749   29734 pod_ready.go:82] duration metric: took 400.070285ms for pod "kube-controller-manager-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.106758   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42gpk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.301796   29734 request.go:632] Waited for 194.972487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-42gpk
	I0917 17:20:20.301870   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-42gpk
	I0917 17:20:20.301877   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.301887   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.301892   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.305925   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:20.501826   29734 request.go:632] Waited for 195.291541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.501896   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:20.501910   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.501921   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.501931   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.506082   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:20.506756   29734 pod_ready.go:93] pod "kube-proxy-42gpk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.506789   29734 pod_ready.go:82] duration metric: took 400.024002ms for pod "kube-proxy-42gpk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.506800   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.701779   29734 request.go:632] Waited for 194.912668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:20:20.701868   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrxk
	I0917 17:20:20.701879   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.701887   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.701893   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.705311   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.901827   29734 request.go:632] Waited for 195.713484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:20.901907   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:20.901922   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:20.901933   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:20.901939   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:20.905569   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:20.906149   29734 pod_ready.go:93] pod "kube-proxy-7rrxk" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:20.906171   29734 pod_ready.go:82] duration metric: took 399.363425ms for pod "kube-proxy-7rrxk" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:20.906183   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.102209   29734 request.go:632] Waited for 195.95697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:20:21.102264   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xmfcj
	I0917 17:20:21.102269   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.102277   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.102280   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.105937   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.302138   29734 request.go:632] Waited for 195.366412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:21.302216   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:21.302222   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.302231   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.302238   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.305707   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.306269   29734 pod_ready.go:93] pod "kube-proxy-xmfcj" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:21.306286   29734 pod_ready.go:82] duration metric: took 400.091414ms for pod "kube-proxy-xmfcj" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.306296   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.501862   29734 request.go:632] Waited for 195.510489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:20:21.501916   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247
	I0917 17:20:21.501947   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.501960   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.501971   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.505337   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.702381   29734 request.go:632] Waited for 196.386954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:21.702453   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247
	I0917 17:20:21.702462   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.702469   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.702473   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.706002   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:21.706592   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:21.706614   29734 pod_ready.go:82] duration metric: took 400.31163ms for pod "kube-scheduler-ha-181247" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.706623   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:21.902664   29734 request.go:632] Waited for 195.968567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:20:21.902728   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m02
	I0917 17:20:21.902734   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:21.902742   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.902748   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:21.906255   29734 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:22.102348   29734 request.go:632] Waited for 195.386611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:22.102411   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m02
	I0917 17:20:22.102417   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.102425   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.102429   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.108294   29734 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:22.109362   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.109389   29734 pod_ready.go:82] duration metric: took 402.758186ms for pod "kube-scheduler-ha-181247-m02" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.109403   29734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.301913   29734 request.go:632] Waited for 192.42907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m03
	I0917 17:20:22.301971   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181247-m03
	I0917 17:20:22.301976   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.301999   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.302006   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.306135   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.502041   29734 request.go:632] Waited for 195.243772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:22.502115   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes/ha-181247-m03
	I0917 17:20:22.502124   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.502131   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.502137   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.506991   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.507512   29734 pod_ready.go:93] pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.507534   29734 pod_ready.go:82] duration metric: took 398.122459ms for pod "kube-scheduler-ha-181247-m03" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.507548   29734 pod_ready.go:39] duration metric: took 5.201782079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
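The readiness polling above walks the same label selectors one pod at a time through the raw API. A minimal sketch of the equivalent check with kubectl (the context name ha-181247 is taken from this run, and the selectors are the ones listed in the summary line above):

  # Wait for each system-critical component to report Ready, as the wait loop above does
  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl --context ha-181247 -n kube-system wait pod -l "$sel" \
      --for=condition=Ready --timeout=6m
  done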
	I0917 17:20:22.507564   29734 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:20:22.507650   29734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:20:22.526178   29734 api_server.go:72] duration metric: took 20.019308385s to wait for apiserver process to appear ...
	I0917 17:20:22.526212   29734 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:20:22.526234   29734 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0917 17:20:22.531460   29734 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0917 17:20:22.531521   29734 round_trippers.go:463] GET https://192.168.39.195:8443/version
	I0917 17:20:22.531526   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.531534   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.531541   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.532521   29734 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0917 17:20:22.532592   29734 api_server.go:141] control plane version: v1.31.1
	I0917 17:20:22.532610   29734 api_server.go:131] duration metric: took 6.39045ms to wait for apiserver health ...
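The healthz and version probes above hit the API server endpoint directly. A hedged way to reproduce them from the host, assuming /healthz and /version remain readable by anonymous clients (the default minikube RBAC setup); otherwise supply the client certificates from the kubeconfig:

  curl -sk https://192.168.39.195:8443/healthz   # expect: ok
  curl -sk https://192.168.39.195:8443/version   # expect: gitVersion v1.31.1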
	I0917 17:20:22.532619   29734 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:20:22.702015   29734 request.go:632] Waited for 169.322514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:22.702074   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:22.702080   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.702101   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.702110   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.708463   29734 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:22.715445   29734 system_pods.go:59] 24 kube-system pods found
	I0917 17:20:22.715473   29734 system_pods.go:61] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:20:22.715477   29734 system_pods.go:61] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:20:22.715481   29734 system_pods.go:61] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:20:22.715485   29734 system_pods.go:61] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:20:22.715488   29734 system_pods.go:61] "etcd-ha-181247-m03" [793159b6-0236-4b2b-b5a4-ed2f0c9219c2] Running
	I0917 17:20:22.715491   29734 system_pods.go:61] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:20:22.715495   29734 system_pods.go:61] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:20:22.715498   29734 system_pods.go:61] "kindnet-tkbmg" [62acea1a-4ee4-475b-9a04-6b8d50d7f1a0] Running
	I0917 17:20:22.715501   29734 system_pods.go:61] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:20:22.715507   29734 system_pods.go:61] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:20:22.715511   29734 system_pods.go:61] "kube-apiserver-ha-181247-m03" [7cdb7a90-1646-4bcf-9665-46ce3c679990] Running
	I0917 17:20:22.715517   29734 system_pods.go:61] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:20:22.715522   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:20:22.715529   29734 system_pods.go:61] "kube-controller-manager-ha-181247-m03" [65f2d1cf-4862-4325-afd6-746cc48d2d7f] Running
	I0917 17:20:22.715534   29734 system_pods.go:61] "kube-proxy-42gpk" [7bb2338a-c1fd-4f7e-8981-57b7319cb457] Running
	I0917 17:20:22.715542   29734 system_pods.go:61] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:20:22.715547   29734 system_pods.go:61] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:20:22.715554   29734 system_pods.go:61] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:20:22.715560   29734 system_pods.go:61] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:20:22.715566   29734 system_pods.go:61] "kube-scheduler-ha-181247-m03" [fc544b16-e876-4966-a423-d52ff9041059] Running
	I0917 17:20:22.715569   29734 system_pods.go:61] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:20:22.715575   29734 system_pods.go:61] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:20:22.715579   29734 system_pods.go:61] "kube-vip-ha-181247-m03" [44816f72-d64d-4989-8719-b340c1b854d2] Running
	I0917 17:20:22.715584   29734 system_pods.go:61] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:20:22.715590   29734 system_pods.go:74] duration metric: took 182.963459ms to wait for pod list to return data ...
	I0917 17:20:22.715600   29734 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:20:22.902113   29734 request.go:632] Waited for 186.424159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:20:22.902163   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/default/serviceaccounts
	I0917 17:20:22.902169   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:22.902177   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.902186   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:22.906212   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:22.906329   29734 default_sa.go:45] found service account: "default"
	I0917 17:20:22.906343   29734 default_sa.go:55] duration metric: took 190.733459ms for default service account to be created ...
	I0917 17:20:22.906352   29734 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:20:23.102138   29734 request.go:632] Waited for 195.70342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:23.102207   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/namespaces/kube-system/pods
	I0917 17:20:23.102215   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:23.102225   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.102236   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:23.116110   29734 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0917 17:20:23.123175   29734 system_pods.go:86] 24 kube-system pods found
	I0917 17:20:23.123206   29734 system_pods.go:89] "coredns-7c65d6cfc9-5lmg4" [1052e249-3530-4220-8214-0c36a02c4215] Running
	I0917 17:20:23.123214   29734 system_pods.go:89] "coredns-7c65d6cfc9-bdthh" [63ae9d00-44ce-47be-80c5-12144ff8c69b] Running
	I0917 17:20:23.123219   29734 system_pods.go:89] "etcd-ha-181247" [6e221481-8a96-432b-935d-8ed44c26ca62] Running
	I0917 17:20:23.123225   29734 system_pods.go:89] "etcd-ha-181247-m02" [a691f359-6e75-464a-9b99-6a9b91ef4907] Running
	I0917 17:20:23.123232   29734 system_pods.go:89] "etcd-ha-181247-m03" [793159b6-0236-4b2b-b5a4-ed2f0c9219c2] Running
	I0917 17:20:23.123237   29734 system_pods.go:89] "kindnet-2tkbp" [882de4ca-d789-403e-a22e-22fbc776af10] Running
	I0917 17:20:23.123242   29734 system_pods.go:89] "kindnet-qqpgm" [5e8663b0-97a2-4995-951a-5fcee45c71de] Running
	I0917 17:20:23.123248   29734 system_pods.go:89] "kindnet-tkbmg" [62acea1a-4ee4-475b-9a04-6b8d50d7f1a0] Running
	I0917 17:20:23.123253   29734 system_pods.go:89] "kube-apiserver-ha-181247" [5386d1d6-820f-4f46-a379-f38cab3047ad] Running
	I0917 17:20:23.123260   29734 system_pods.go:89] "kube-apiserver-ha-181247-m02" [9611a83e-8be3-41c3-8477-f020d0494000] Running
	I0917 17:20:23.123266   29734 system_pods.go:89] "kube-apiserver-ha-181247-m03" [7cdb7a90-1646-4bcf-9665-46ce3c679990] Running
	I0917 17:20:23.123272   29734 system_pods.go:89] "kube-controller-manager-ha-181247" [9732aff5-419d-4d8c-ba06-ec37a29cdb95] Running
	I0917 17:20:23.123278   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m02" [6bc1cdbf-ef9a-420f-8250-131c7684745e] Running
	I0917 17:20:23.123287   29734 system_pods.go:89] "kube-controller-manager-ha-181247-m03" [65f2d1cf-4862-4325-afd6-746cc48d2d7f] Running
	I0917 17:20:23.123293   29734 system_pods.go:89] "kube-proxy-42gpk" [7bb2338a-c1fd-4f7e-8981-57b7319cb457] Running
	I0917 17:20:23.123302   29734 system_pods.go:89] "kube-proxy-7rrxk" [a075630a-48df-429f-98ef-49bca2d9dac5] Running
	I0917 17:20:23.123308   29734 system_pods.go:89] "kube-proxy-xmfcj" [f2eaf5d5-34e2-45b0-9aa3-5cb28b952dfa] Running
	I0917 17:20:23.123316   29734 system_pods.go:89] "kube-scheduler-ha-181247" [dc64d80c-5975-40e4-b3dd-51a43cb7d5c4] Running
	I0917 17:20:23.123323   29734 system_pods.go:89] "kube-scheduler-ha-181247-m02" [2130254c-2836-4867-b9d4-4371d7897b7f] Running
	I0917 17:20:23.123332   29734 system_pods.go:89] "kube-scheduler-ha-181247-m03" [fc544b16-e876-4966-a423-d52ff9041059] Running
	I0917 17:20:23.123338   29734 system_pods.go:89] "kube-vip-ha-181247" [45c79311-640f-4df4-8902-e3b09f11d417] Running
	I0917 17:20:23.123346   29734 system_pods.go:89] "kube-vip-ha-181247-m02" [8de63338-cae2-4484-87f8-51d71ebd3d5a] Running
	I0917 17:20:23.123351   29734 system_pods.go:89] "kube-vip-ha-181247-m03" [44816f72-d64d-4989-8719-b340c1b854d2] Running
	I0917 17:20:23.123359   29734 system_pods.go:89] "storage-provisioner" [fcef4cf0-61a6-4f9f-9644-f17f7f819237] Running
	I0917 17:20:23.123367   29734 system_pods.go:126] duration metric: took 217.004917ms to wait for k8s-apps to be running ...
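The three waits above (the kube-system pod list returning data, the default service account existing, and all k8s-apps Running) map onto straightforward kubectl queries; a sketch, again assuming the ha-181247 context from this run:

  kubectl --context ha-181247 -n kube-system get pods
  kubectl --context ha-181247 -n default get serviceaccount default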
	I0917 17:20:23.123379   29734 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:20:23.123429   29734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:20:23.143737   29734 system_svc.go:56] duration metric: took 20.348178ms WaitForService to wait for kubelet
	I0917 17:20:23.143773   29734 kubeadm.go:582] duration metric: took 20.636908487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
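The kubelet check is run over SSH inside the node, as the ssh_runner line above shows. A sketch of the same probe through the minikube CLI (profile name ha-181247 assumed from this run):

  minikube -p ha-181247 ssh -- sudo systemctl is-active --quiet kubelet && echo kubelet active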
	I0917 17:20:23.143796   29734 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:20:23.302126   29734 request.go:632] Waited for 158.259398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.195:8443/api/v1/nodes
	I0917 17:20:23.302204   29734 round_trippers.go:463] GET https://192.168.39.195:8443/api/v1/nodes
	I0917 17:20:23.302215   29734 round_trippers.go:469] Request Headers:
	I0917 17:20:23.302225   29734 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.302232   29734 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0917 17:20:23.306459   29734 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0917 17:20:23.307628   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307648   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307657   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307661   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307664   29734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 17:20:23.307667   29734 node_conditions.go:123] node cpu capacity is 2
	I0917 17:20:23.307671   29734 node_conditions.go:105] duration metric: took 163.870275ms to run NodePressure ...
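The NodePressure pass above only reads each node's capacity fields. One way to see the same figures (17734596Ki ephemeral storage and 2 CPUs per node) directly:

  kubectl --context ha-181247 get nodes -o \
    custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage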
	I0917 17:20:23.307684   29734 start.go:241] waiting for startup goroutines ...
	I0917 17:20:23.307702   29734 start.go:255] writing updated cluster config ...
	I0917 17:20:23.307971   29734 ssh_runner.go:195] Run: rm -f paused
	I0917 17:20:23.365174   29734 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 17:20:23.367722   29734 out.go:177] * Done! kubectl is now configured to use "ha-181247" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.002566543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593903002541730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9eecde6-a1ce-469e-8164-bd59b78f7264 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.003359818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1061f1e-e7c9-42d3-b96b-06f68bbe4448 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.003440572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1061f1e-e7c9-42d3-b96b-06f68bbe4448 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.003687393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1061f1e-e7c9-42d3-b96b-06f68bbe4448 name=/runtime.v1.RuntimeService/ListContainers
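The CRI-O debug entries above are responses to Version, ImageFsInfo, and ListContainers calls made against the runtime over the CRI. The same queries can be issued by hand with crictl from inside the node; a sketch, assuming crictl on the node is already pointed at the CRI-O socket (the minikube default):

  minikube -p ha-181247 ssh -- sudo crictl version
  minikube -p ha-181247 ssh -- sudo crictl imagefsinfo
  minikube -p ha-181247 ssh -- sudo crictl ps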
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.043775279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be28f910-c5c8-4387-a75d-59d8e9ab3344 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.043852001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be28f910-c5c8-4387-a75d-59d8e9ab3344 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.045323374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f39f0855-4da9-4399-b7e7-0c59dc622cbc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.045870631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593903045840599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f39f0855-4da9-4399-b7e7-0c59dc622cbc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.046360354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a414c6cd-1470-4765-ad29-1dd9529ddd0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.046438149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a414c6cd-1470-4765-ad29-1dd9529ddd0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.046689178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a414c6cd-1470-4765-ad29-1dd9529ddd0d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.090739099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=539793eb-45e1-4311-ba58-53656e2044b7 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.090816262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=539793eb-45e1-4311-ba58-53656e2044b7 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.092613984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a41dc2f-8e10-4c81-b80a-f09bf58fce6d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.093129159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593903093099718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a41dc2f-8e10-4c81-b80a-f09bf58fce6d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.093853129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99b82960-4a89-4f5b-b0f4-d530d260c5e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.093915581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99b82960-4a89-4f5b-b0f4-d530d260c5e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.094342044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99b82960-4a89-4f5b-b0f4-d530d260c5e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.139787923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=308c90ff-321c-49c4-abd0-b9c5c684e5ee name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.139893160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=308c90ff-321c-49c4-abd0-b9c5c684e5ee name=/runtime.v1.RuntimeService/Version
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.142028422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ec920f1-e16d-40e2-82be-37832d957592 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.142616416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593903142589106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ec920f1-e16d-40e2-82be-37832d957592 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.143519610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88aa0842-a986-411b-8fea-c63793106e3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.143598695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88aa0842-a986-411b-8fea-c63793106e3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:25:03 ha-181247 crio[655]: time="2024-09-17 17:25:03.143865961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726593627164258619,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490994398983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726593490936131858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba,PodSandboxId:e0668ebeee0ff4dc748c04ce37a44def6862e23adab72b158cf4c851639e98aa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726593490906209238,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172659347
8702505406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726593478343285876,Labels:map[string]str
ing{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5,PodSandboxId:8f4315718476a4a22030583c6b103725a1824c70e7a4b1bbdf37dd9efa472fc5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726593467932726338,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3d93f4585a8cac4e1cf6a8c7c6b68d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726593466010393514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a,PodSandboxId:3a00f7ab2aec732ef2fbdd6f7b9f0b60cd84c2f9b06306821707f58c3602fbc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726593465986273254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d,PodSandboxId:ad3ec6af7c9472fc4d0b392ea77e8488ace601ffa41f53e6d0309bbd19491f62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726593465857888053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apis
erver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726593465866557307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88aa0842-a986-411b-8fea-c63793106e3d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1e590e905eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   032ab62b0ab68       busybox-7dff88458-w8wxj
	f192df08c3590       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   4564f11734089       coredns-7c65d6cfc9-bdthh
	595bdaca307f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   251b80e9b641b       coredns-7c65d6cfc9-5lmg4
	4c6e5f75c9480       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e0668ebeee0ff       storage-provisioner
	aa3e79172e867       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   2c5d3e765b253       kube-proxy-7rrxk
	8d41e13428885       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   030199bb820c5       kindnet-2tkbp
	fe133e1d0be65       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   8f4315718476a       kube-vip-ha-181247
	e131e7c4af3fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   6c0fc2dc035f9       etcd-ha-181247
	1bd357b39ecdb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   3a00f7ab2aec7       kube-controller-manager-ha-181247
	2b77bc3ea3167       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   64083dac55fed       kube-scheduler-ha-181247
	c48764653b979       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   ad3ec6af7c947       kube-apiserver-ha-181247
	
	
	==> coredns [595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242] <==
	[INFO] 127.0.0.1:49564 - 56083 "HINFO IN 7646535878500117191.6117038551668512559. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015772836s
	[INFO] 10.244.1.2:46481 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.022391474s
	[INFO] 10.244.2.2:58475 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000197137s
	[INFO] 10.244.0.4:47160 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262396s
	[INFO] 10.244.0.4:43644 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000098798s
	[INFO] 10.244.0.4:58082 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00083865s
	[INFO] 10.244.1.2:33599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003427065s
	[INFO] 10.244.1.2:48415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218431s
	[INFO] 10.244.1.2:36800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118274s
	[INFO] 10.244.2.2:43997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248398s
	[INFO] 10.244.2.2:35973 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001811s
	[INFO] 10.244.2.2:49572 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172284s
	[INFO] 10.244.0.4:47826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002065843s
	[INFO] 10.244.0.4:36193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199582s
	[INFO] 10.244.0.4:50628 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110526s
	[INFO] 10.244.0.4:44724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114759s
	[INFO] 10.244.0.4:42511 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083739s
	[INFO] 10.244.2.2:46937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116808s
	[INFO] 10.244.2.2:44451 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173075s
	[INFO] 10.244.0.4:40459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064325s
	[INFO] 10.244.1.2:49457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184596s
	[INFO] 10.244.1.2:38498 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205346s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130934s
	[INFO] 10.244.2.2:41589 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130541s
	[INFO] 10.244.0.4:45130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138569s
	
	
	==> coredns [f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5] <==
	[INFO] 10.244.1.2:48013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120798s
	[INFO] 10.244.2.2:52666 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00190695s
	[INFO] 10.244.2.2:46125 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169465s
	[INFO] 10.244.2.2:56262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604819s
	[INFO] 10.244.2.2:50732 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016708s
	[INFO] 10.244.2.2:42284 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132466s
	[INFO] 10.244.0.4:37678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213338s
	[INFO] 10.244.0.4:44751 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122274s
	[INFO] 10.244.0.4:56988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001602111s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026031s
	[INFO] 10.244.1.2:40978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206846s
	[INFO] 10.244.1.2:41313 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097139s
	[INFO] 10.244.1.2:50208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151609s
	[INFO] 10.244.2.2:49264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143158s
	[INFO] 10.244.2.2:54921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162093s
	[INFO] 10.244.0.4:54768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211558s
	[INFO] 10.244.0.4:47021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048005s
	[INFO] 10.244.0.4:52698 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004567s
	[INFO] 10.244.1.2:39357 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237795s
	[INFO] 10.244.1.2:48172 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183611s
	[INFO] 10.244.2.2:56434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125357s
	[INFO] 10.244.2.2:37159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179695s
	[INFO] 10.244.0.4:40381 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150761s
	[INFO] 10.244.0.4:39726 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074302s
	[INFO] 10.244.0.4:39990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097242s
	
	
	==> describe nodes <==
	Name:               ha-181247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:25:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:20:56 +0000   Tue, 17 Sep 2024 17:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-181247
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef45c02f40245a0a3ede964289ca350
	  System UUID:                fef45c02-f402-45a0-a3ed-e964289ca350
	  Boot ID:                    3253b46a-acef-407f-8fd6-3d5cae46a6bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w8wxj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7c65d6cfc9-5lmg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 coredns-7c65d6cfc9-bdthh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-ha-181247                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m11s
	  kube-system                 kindnet-2tkbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m7s
	  kube-system                 kube-apiserver-ha-181247             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-181247    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-7rrxk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-181247             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-181247                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m4s   kube-proxy       
	  Normal  Starting                 7m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m11s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m11s  kubelet          Node ha-181247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s  kubelet          Node ha-181247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s  kubelet          Node ha-181247 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal  NodeReady                6m53s  kubelet          Node ha-181247 status is now: NodeReady
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal  RegisteredNode           4m56s  node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	
	
	Name:               ha-181247-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:21:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:20:48 +0000   Tue, 17 Sep 2024 17:22:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-181247-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2585a68084874db38baf46d679282ed1
	  System UUID:                2585a680-8487-4db3-8baf-46d679282ed1
	  Boot ID:                    5bfdf389-469b-42f9-975f-6c72da7743b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96b8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-181247-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-qqpgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-181247-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-181247-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-xmfcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-181247-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-181247-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-181247-m02 status is now: NodeNotReady
	
	
	Name:               ha-181247-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_20_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:24:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:19:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:20:28 +0000   Tue, 17 Sep 2024 17:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-181247-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b42e80aa44a47aebcfcad072d252d58
	  System UUID:                2b42e80a-a44a-47ae-bcfc-ad072d252d58
	  Boot ID:                    dd80ee86-310e-4a32-94de-53cde30919d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mxrbl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-181247-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-tkbmg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-181247-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-181247-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-42gpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-181247-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-181247-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m6s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m6s)  kubelet          Node ha-181247-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m6s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal  RegisteredNode           4m56s                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	
	
	Name:               ha-181247-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_21_01_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:24:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:21:32 +0000   Tue, 17 Sep 2024 17:21:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-181247-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b33a6f0f712480eacc4183b870e9eb2
	  System UUID:                6b33a6f0-f712-480e-acc4-183b870e9eb2
	  Boot ID:                    85fce420-4742-47df-a8ae-66c460bcd5eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ntzg5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-shlht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m3s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m3s)  kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m3s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-181247-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep17 17:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051348] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884229] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.613872] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611337] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.348569] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.067341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053278] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.203985] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.135345] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.305138] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.213157] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +4.747183] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.062031] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.385150] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.092383] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.405737] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 17:18] kauditd_printk_skb: 41 callbacks suppressed
	[ +43.240359] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91] <==
	{"level":"warn","ts":"2024-09-17T17:25:03.317861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.415592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.428446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.438622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.442489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.458142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.465389Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.474084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.479146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.484711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.493681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.500726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.508257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.512178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.515436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.515596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.526346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.535693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.547995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.555103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.558693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.565245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.571698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.580348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-17T17:25:03.616171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"324857e3fe6e5c62","from":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:25:03 up 7 min,  0 users,  load average: 0.41, 0.52, 0.28
	Linux ha-181247 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2] <==
	I0917 17:24:29.805690       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:24:39.805104       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:24:39.805219       1 main.go:299] handling current node
	I0917 17:24:39.805246       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:24:39.805263       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:24:39.805413       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:24:39.805437       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:24:39.805500       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:24:39.805518       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:24:49.805159       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:24:49.805223       1 main.go:299] handling current node
	I0917 17:24:49.805242       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:24:49.805250       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:24:49.805435       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:24:49.805469       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:24:49.805535       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:24:49.805563       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:24:59.801097       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:24:59.801228       1 main.go:299] handling current node
	I0917 17:24:59.801258       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:24:59.801277       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:24:59.801443       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:24:59.801469       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:24:59.801544       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:24:59.801563       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d] <==
	W0917 17:17:51.094671       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0917 17:17:51.095771       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:17:51.109019       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:17:51.111941       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 17:17:52.254623       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 17:17:52.271476       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0917 17:17:52.389757       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:17:56.707987       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0917 17:17:56.873233       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0917 17:20:28.669364       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58950: use of closed network connection
	E0917 17:20:28.870825       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58974: use of closed network connection
	E0917 17:20:29.150547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59002: use of closed network connection
	E0917 17:20:29.343681       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59024: use of closed network connection
	E0917 17:20:29.545545       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59054: use of closed network connection
	E0917 17:20:29.731828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59064: use of closed network connection
	E0917 17:20:29.914034       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59080: use of closed network connection
	E0917 17:20:30.105963       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59096: use of closed network connection
	E0917 17:20:30.297412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59114: use of closed network connection
	E0917 17:20:30.622591       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59136: use of closed network connection
	E0917 17:20:30.804754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59144: use of closed network connection
	E0917 17:20:30.991519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59150: use of closed network connection
	E0917 17:20:31.181489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59176: use of closed network connection
	E0917 17:20:31.364445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59192: use of closed network connection
	E0917 17:20:31.551192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59214: use of closed network connection
	W0917 17:21:51.109604       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.122 192.168.39.195]
	
	
	==> kube-controller-manager [1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a] <==
	I0917 17:20:56.915161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247"
	I0917 17:21:01.359253       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-181247-m04\" does not exist"
	I0917 17:21:01.388525       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-181247-m04" podCIDRs=["10.244.3.0/24"]
	I0917 17:21:01.388577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.388609       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.395586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.527720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:01.946412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:02.689396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:06.070265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:06.070709       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181247-m04"
	I0917 17:21:06.274495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:11.519267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:20.728668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:20.728824       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181247-m04"
	I0917 17:21:20.745462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:21.087006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:21:32.209669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:22:21.116786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:21.117029       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181247-m04"
	I0917 17:22:21.150207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:21.323712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.682964ms"
	I0917 17:22:21.323825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.393µs"
	I0917 17:22:22.714436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:22:26.437332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	
	
	==> kube-proxy [aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:17:58.949022       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:17:58.972251       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0917 17:17:58.972399       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:17:59.022186       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:17:59.022253       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:17:59.022279       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:17:59.025845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:17:59.026705       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:17:59.026735       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:17:59.028836       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:17:59.028842       1 config.go:199] "Starting service config controller"
	I0917 17:17:59.029428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:17:59.031271       1 config.go:328] "Starting node config controller"
	I0917 17:17:59.029496       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:17:59.038640       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:17:59.038313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:17:59.039243       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:17:59.132284       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4] <==
	I0917 17:20:24.375688       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-96b8c" node="ha-181247-m02"
	E0917 17:20:24.377314       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w8wxj\": pod busybox-7dff88458-w8wxj is already assigned to node \"ha-181247\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w8wxj" node="ha-181247"
	E0917 17:20:24.377407       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 681ace64-6c78-437e-9e9d-46edd2b4a8c4(default/busybox-7dff88458-w8wxj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w8wxj"
	E0917 17:20:24.377434       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w8wxj\": pod busybox-7dff88458-w8wxj is already assigned to node \"ha-181247\"" pod="default/busybox-7dff88458-w8wxj"
	I0917 17:20:24.377472       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w8wxj" node="ha-181247"
	E0917 17:21:01.463474       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-w5j8r\": pod kindnet-w5j8r is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-w5j8r" node="ha-181247-m04"
	E0917 17:21:01.463579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d9366aa1-9205-4967-a75b-641916ad7d21(kube-system/kindnet-w5j8r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-w5j8r"
	E0917 17:21:01.463614       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-w5j8r\": pod kindnet-w5j8r is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-w5j8r"
	I0917 17:21:01.463649       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-w5j8r" node="ha-181247-m04"
	E0917 17:21:01.480551       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-shlht\": pod kube-proxy-shlht is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-shlht" node="ha-181247-m04"
	E0917 17:21:01.480634       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod af3ec07d-a374-46d8-b9ab-ac02aa23bb0f(kube-system/kube-proxy-shlht) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-shlht"
	E0917 17:21:01.480653       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-shlht\": pod kube-proxy-shlht is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-shlht"
	I0917 17:21:01.480686       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-shlht" node="ha-181247-m04"
	E0917 17:21:01.481212       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481272       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8c3a39fb-fa0a-4e5d-ae4e-7c468cf8cc54(kube-system/kindnet-ntzg5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ntzg5"
	E0917 17:21:01.481288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-ntzg5"
	I0917 17:21:01.481324       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.481771       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod be89da91-3d03-49d5-9c40-8f0a10a29dc4(kube-system/kube-proxy-wxx9b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wxx9b"
	E0917 17:21:01.481794       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-wxx9b"
	I0917 17:21:01.481828       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.598636       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:21:01.598783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df1f81cf-787e-4442-b864-71023978df35(kube-system/kindnet-rjzts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rjzts"
	E0917 17:21:01.598965       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-rjzts"
	I0917 17:21:01.599124       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	
	
	==> kubelet <==
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:23:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:23:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:23:52 ha-181247 kubelet[1302]: E0917 17:23:52.546397    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593832545993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:23:52 ha-181247 kubelet[1302]: E0917 17:23:52.546438    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593832545993478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:02 ha-181247 kubelet[1302]: E0917 17:24:02.550589    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842548800763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:02 ha-181247 kubelet[1302]: E0917 17:24:02.550620    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593842548800763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:12 ha-181247 kubelet[1302]: E0917 17:24:12.553997    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593852553179402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:12 ha-181247 kubelet[1302]: E0917 17:24:12.554123    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593852553179402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:22 ha-181247 kubelet[1302]: E0917 17:24:22.557173    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593862556286412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:22 ha-181247 kubelet[1302]: E0917 17:24:22.557520    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593862556286412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:32 ha-181247 kubelet[1302]: E0917 17:24:32.559908    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593872559585204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:32 ha-181247 kubelet[1302]: E0917 17:24:32.559936    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593872559585204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:42 ha-181247 kubelet[1302]: E0917 17:24:42.562489    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593882561937058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:42 ha-181247 kubelet[1302]: E0917 17:24:42.562931    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593882561937058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:52 ha-181247 kubelet[1302]: E0917 17:24:52.457467    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:24:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:24:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:24:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:24:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:24:52 ha-181247 kubelet[1302]: E0917 17:24:52.566706    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593892565969152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:24:52 ha-181247 kubelet[1302]: E0917 17:24:52.566738    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593892565969152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:25:02 ha-181247 kubelet[1302]: E0917 17:25:02.568935    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593902568477090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:25:02 ha-181247 kubelet[1302]: E0917 17:25:02.568989    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726593902568477090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-181247 -n ha-181247
helpers_test.go:261: (dbg) Run:  kubectl --context ha-181247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-181247 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-181247 -v=7 --alsologtostderr
E0917 17:26:24.983089   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:26:52.685514   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-181247 -v=7 --alsologtostderr: exit status 82 (2m1.955321285s)

                                                
                                                
-- stdout --
	* Stopping node "ha-181247-m04"  ...
	* Stopping node "ha-181247-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:25:05.100706   35906 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:25:05.100826   35906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:25:05.100835   35906 out.go:358] Setting ErrFile to fd 2...
	I0917 17:25:05.100841   35906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:25:05.101034   35906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:25:05.101287   35906 out.go:352] Setting JSON to false
	I0917 17:25:05.101388   35906 mustload.go:65] Loading cluster: ha-181247
	I0917 17:25:05.101794   35906 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:25:05.101877   35906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:25:05.102059   35906 mustload.go:65] Loading cluster: ha-181247
	I0917 17:25:05.102189   35906 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:25:05.102224   35906 stop.go:39] StopHost: ha-181247-m04
	I0917 17:25:05.102594   35906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:05.102634   35906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:05.118028   35906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0917 17:25:05.118582   35906 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:05.119214   35906 main.go:141] libmachine: Using API Version  1
	I0917 17:25:05.119235   35906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:05.119590   35906 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:05.122172   35906 out.go:177] * Stopping node "ha-181247-m04"  ...
	I0917 17:25:05.123858   35906 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 17:25:05.123892   35906 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:25:05.124162   35906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 17:25:05.124192   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:25:05.127314   35906 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:05.127876   35906 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:20:47 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:25:05.127908   35906 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:25:05.128105   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:25:05.128293   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:25:05.128477   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:25:05.128629   35906 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:25:05.216959   35906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 17:25:05.271107   35906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 17:25:05.325659   35906 main.go:141] libmachine: Stopping "ha-181247-m04"...
	I0917 17:25:05.325685   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:25:05.327254   35906 main.go:141] libmachine: (ha-181247-m04) Calling .Stop
	I0917 17:25:05.330846   35906 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 0/120
	I0917 17:25:06.573149   35906 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:25:06.574349   35906 main.go:141] libmachine: Machine "ha-181247-m04" was stopped.
	I0917 17:25:06.574370   35906 stop.go:75] duration metric: took 1.450519994s to stop
	I0917 17:25:06.574389   35906 stop.go:39] StopHost: ha-181247-m03
	I0917 17:25:06.574690   35906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:25:06.574733   35906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:25:06.590795   35906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0917 17:25:06.591264   35906 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:25:06.591824   35906 main.go:141] libmachine: Using API Version  1
	I0917 17:25:06.591847   35906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:25:06.592179   35906 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:25:06.595255   35906 out.go:177] * Stopping node "ha-181247-m03"  ...
	I0917 17:25:06.596548   35906 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 17:25:06.596580   35906 main.go:141] libmachine: (ha-181247-m03) Calling .DriverName
	I0917 17:25:06.596841   35906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 17:25:06.596867   35906 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHHostname
	I0917 17:25:06.600404   35906 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:06.600903   35906 main.go:141] libmachine: (ha-181247-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:b5:33", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:19:24 +0000 UTC Type:0 Mac:52:54:00:48:b5:33 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:ha-181247-m03 Clientid:01:52:54:00:48:b5:33}
	I0917 17:25:06.600934   35906 main.go:141] libmachine: (ha-181247-m03) DBG | domain ha-181247-m03 has defined IP address 192.168.39.122 and MAC address 52:54:00:48:b5:33 in network mk-ha-181247
	I0917 17:25:06.601115   35906 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHPort
	I0917 17:25:06.601318   35906 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHKeyPath
	I0917 17:25:06.601485   35906 main.go:141] libmachine: (ha-181247-m03) Calling .GetSSHUsername
	I0917 17:25:06.601606   35906 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m03/id_rsa Username:docker}
	I0917 17:25:06.695258   35906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 17:25:06.754702   35906 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 17:25:06.812127   35906 main.go:141] libmachine: Stopping "ha-181247-m03"...
	I0917 17:25:06.812149   35906 main.go:141] libmachine: (ha-181247-m03) Calling .GetState
	I0917 17:25:06.813685   35906 main.go:141] libmachine: (ha-181247-m03) Calling .Stop
	I0917 17:25:06.817325   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 0/120
	I0917 17:25:07.818595   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 1/120
	I0917 17:25:08.820014   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 2/120
	I0917 17:25:09.821360   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 3/120
	I0917 17:25:10.822855   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 4/120
	I0917 17:25:11.825401   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 5/120
	I0917 17:25:12.827567   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 6/120
	I0917 17:25:13.829668   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 7/120
	I0917 17:25:14.831223   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 8/120
	I0917 17:25:15.832722   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 9/120
	I0917 17:25:16.834930   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 10/120
	I0917 17:25:17.836357   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 11/120
	I0917 17:25:18.837880   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 12/120
	I0917 17:25:19.839337   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 13/120
	I0917 17:25:20.840771   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 14/120
	I0917 17:25:21.842659   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 15/120
	I0917 17:25:22.844195   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 16/120
	I0917 17:25:23.845536   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 17/120
	I0917 17:25:24.846940   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 18/120
	I0917 17:25:25.848250   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 19/120
	I0917 17:25:26.850302   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 20/120
	I0917 17:25:27.851755   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 21/120
	I0917 17:25:28.853610   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 22/120
	I0917 17:25:29.855054   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 23/120
	I0917 17:25:30.856678   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 24/120
	I0917 17:25:31.858359   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 25/120
	I0917 17:25:32.860096   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 26/120
	I0917 17:25:33.862068   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 27/120
	I0917 17:25:34.864124   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 28/120
	I0917 17:25:35.865708   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 29/120
	I0917 17:25:36.867729   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 30/120
	I0917 17:25:37.869112   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 31/120
	I0917 17:25:38.870810   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 32/120
	I0917 17:25:39.872479   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 33/120
	I0917 17:25:40.874049   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 34/120
	I0917 17:25:41.875855   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 35/120
	I0917 17:25:42.877289   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 36/120
	I0917 17:25:43.878836   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 37/120
	I0917 17:25:44.880135   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 38/120
	I0917 17:25:45.881637   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 39/120
	I0917 17:25:46.883636   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 40/120
	I0917 17:25:47.885023   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 41/120
	I0917 17:25:48.886488   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 42/120
	I0917 17:25:49.887966   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 43/120
	I0917 17:25:50.889491   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 44/120
	I0917 17:25:51.891272   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 45/120
	I0917 17:25:52.892582   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 46/120
	I0917 17:25:53.893950   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 47/120
	I0917 17:25:54.895648   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 48/120
	I0917 17:25:55.896803   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 49/120
	I0917 17:25:56.898647   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 50/120
	I0917 17:25:57.900113   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 51/120
	I0917 17:25:58.901479   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 52/120
	I0917 17:25:59.902967   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 53/120
	I0917 17:26:00.904282   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 54/120
	I0917 17:26:01.906406   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 55/120
	I0917 17:26:02.907634   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 56/120
	I0917 17:26:03.909079   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 57/120
	I0917 17:26:04.910366   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 58/120
	I0917 17:26:05.911577   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 59/120
	I0917 17:26:06.912973   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 60/120
	I0917 17:26:07.914259   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 61/120
	I0917 17:26:08.915444   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 62/120
	I0917 17:26:09.916855   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 63/120
	I0917 17:26:10.918342   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 64/120
	I0917 17:26:11.920098   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 65/120
	I0917 17:26:12.921703   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 66/120
	I0917 17:26:13.923226   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 67/120
	I0917 17:26:14.924801   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 68/120
	I0917 17:26:15.926166   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 69/120
	I0917 17:26:16.928407   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 70/120
	I0917 17:26:17.929866   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 71/120
	I0917 17:26:18.931430   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 72/120
	I0917 17:26:19.933027   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 73/120
	I0917 17:26:20.934426   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 74/120
	I0917 17:26:21.936177   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 75/120
	I0917 17:26:22.937680   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 76/120
	I0917 17:26:23.939049   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 77/120
	I0917 17:26:24.940676   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 78/120
	I0917 17:26:25.942035   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 79/120
	I0917 17:26:26.943580   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 80/120
	I0917 17:26:27.945399   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 81/120
	I0917 17:26:28.946814   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 82/120
	I0917 17:26:29.948202   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 83/120
	I0917 17:26:30.949431   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 84/120
	I0917 17:26:31.951010   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 85/120
	I0917 17:26:32.952525   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 86/120
	I0917 17:26:33.953907   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 87/120
	I0917 17:26:34.955421   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 88/120
	I0917 17:26:35.956720   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 89/120
	I0917 17:26:36.958422   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 90/120
	I0917 17:26:37.959816   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 91/120
	I0917 17:26:38.961167   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 92/120
	I0917 17:26:39.962793   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 93/120
	I0917 17:26:40.964280   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 94/120
	I0917 17:26:41.966190   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 95/120
	I0917 17:26:42.968113   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 96/120
	I0917 17:26:43.969496   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 97/120
	I0917 17:26:44.970974   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 98/120
	I0917 17:26:45.972222   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 99/120
	I0917 17:26:46.973911   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 100/120
	I0917 17:26:47.975308   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 101/120
	I0917 17:26:48.976924   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 102/120
	I0917 17:26:49.978386   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 103/120
	I0917 17:26:50.979797   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 104/120
	I0917 17:26:51.981222   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 105/120
	I0917 17:26:52.982939   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 106/120
	I0917 17:26:53.984398   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 107/120
	I0917 17:26:54.985904   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 108/120
	I0917 17:26:55.987469   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 109/120
	I0917 17:26:56.988927   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 110/120
	I0917 17:26:57.990605   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 111/120
	I0917 17:26:58.992002   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 112/120
	I0917 17:26:59.993396   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 113/120
	I0917 17:27:00.994756   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 114/120
	I0917 17:27:01.996506   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 115/120
	I0917 17:27:02.997966   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 116/120
	I0917 17:27:03.999483   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 117/120
	I0917 17:27:05.001123   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 118/120
	I0917 17:27:06.002631   35906 main.go:141] libmachine: (ha-181247-m03) Waiting for machine to stop 119/120
	I0917 17:27:07.003626   35906 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 17:27:07.003672   35906 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0917 17:27:07.005869   35906 out.go:201] 
	W0917 17:27:07.007379   35906 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0917 17:27:07.007396   35906 out.go:270] * 
	* 
	W0917 17:27:07.010227   35906 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 17:27:07.011498   35906 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-181247 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-181247 --wait=true -v=7 --alsologtostderr
E0917 17:28:50.532513   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:30:13.601574   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-181247 --wait=true -v=7 --alsologtostderr: (4m15.084080296s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-181247
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-181247 -n ha-181247
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-181247 logs -n 25: (1.956748155s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m04 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp testdata/cp-test.txt                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m04_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03:/home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m03 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-181247 node stop m02 -v=7                                                     | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-181247 node start m02 -v=7                                                    | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-181247 -v=7                                                           | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-181247 -v=7                                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-181247 --wait=true -v=7                                                    | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:27 UTC | 17 Sep 24 17:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-181247                                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:31 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:27:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
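	The header layout above is the standard klog prefix: severity letter, date, wall-clock time, thread id, source file:line, then the message. A minimal sketch for filtering such lines with standard tools; the file name minikube-last-start.log is only an assumed example:

	    # Show only warning, error, and fatal lines from a saved copy of this log.
	    grep -E '^[WEF][0-9][0-9][0-9][0-9] ' minikube-last-start.log
	    # Count lines per severity letter.
	    cut -c1 minikube-last-start.log | grep -E '^[IWEF]$' | sort | uniq -c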
	I0917 17:27:07.059381   36365 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:27:07.059669   36365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:27:07.059681   36365 out.go:358] Setting ErrFile to fd 2...
	I0917 17:27:07.059686   36365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:27:07.059923   36365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:27:07.060544   36365 out.go:352] Setting JSON to false
	I0917 17:27:07.061656   36365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4142,"bootTime":1726589885,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:27:07.061761   36365 start.go:139] virtualization: kvm guest
	I0917 17:27:07.064168   36365 out.go:177] * [ha-181247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:27:07.065879   36365 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:27:07.065890   36365 notify.go:220] Checking for updates...
	I0917 17:27:07.068433   36365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:27:07.070027   36365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:27:07.071316   36365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:27:07.072756   36365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:27:07.074232   36365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:27:07.076170   36365 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:27:07.076317   36365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:27:07.076843   36365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:27:07.076885   36365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:27:07.095271   36365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I0917 17:27:07.095691   36365 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:27:07.096280   36365 main.go:141] libmachine: Using API Version  1
	I0917 17:27:07.096312   36365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:27:07.096678   36365 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:27:07.096949   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.136571   36365 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 17:27:07.137942   36365 start.go:297] selected driver: kvm2
	I0917 17:27:07.137964   36365 start.go:901] validating driver "kvm2" against &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:27:07.138114   36365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:27:07.138495   36365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:27:07.138599   36365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:27:07.154907   36365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:27:07.155585   36365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:27:07.155627   36365 cni.go:84] Creating CNI manager for ""
	I0917 17:27:07.155693   36365 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 17:27:07.155783   36365 start.go:340] cluster config:
	{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:27:07.155991   36365 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:27:07.158391   36365 out.go:177] * Starting "ha-181247" primary control-plane node in "ha-181247" cluster
	I0917 17:27:07.159905   36365 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:27:07.159960   36365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:27:07.159974   36365 cache.go:56] Caching tarball of preloaded images
	I0917 17:27:07.160079   36365 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:27:07.160092   36365 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:27:07.160241   36365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:27:07.160477   36365 start.go:360] acquireMachinesLock for ha-181247: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:27:07.160532   36365 start.go:364] duration metric: took 33.648µs to acquireMachinesLock for "ha-181247"
	I0917 17:27:07.160552   36365 start.go:96] Skipping create...Using existing machine configuration
	I0917 17:27:07.160560   36365 fix.go:54] fixHost starting: 
	I0917 17:27:07.160856   36365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:27:07.160896   36365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:27:07.176113   36365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0917 17:27:07.176651   36365 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:27:07.177156   36365 main.go:141] libmachine: Using API Version  1
	I0917 17:27:07.177178   36365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:27:07.177521   36365 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:27:07.177724   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.177883   36365 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:27:07.179481   36365 fix.go:112] recreateIfNeeded on ha-181247: state=Running err=<nil>
	W0917 17:27:07.179498   36365 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 17:27:07.181707   36365 out.go:177] * Updating the running kvm2 "ha-181247" VM ...
	I0917 17:27:07.183167   36365 machine.go:93] provisionDockerMachine start ...
	I0917 17:27:07.183188   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.183440   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.186012   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.186507   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.186526   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.186805   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.187009   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.187171   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.187271   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.187398   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.187650   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.187663   36365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:27:07.302568   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:27:07.302602   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.302861   36365 buildroot.go:166] provisioning hostname "ha-181247"
	I0917 17:27:07.302890   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.303115   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.306335   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.306806   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.306836   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.307024   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.307210   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.307416   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.307551   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.307706   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.307974   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.307993   36365 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247 && echo "ha-181247" | sudo tee /etc/hostname
	I0917 17:27:07.438076   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:27:07.438100   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.441155   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.441659   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.441695   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.441850   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.442049   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.442205   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.442337   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.442501   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.442673   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.442687   36365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:27:07.555031   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
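	The shell snippet above only rewrites the 127.0.1.1 entry when the hostname is not already present, so re-running it is harmless. A small sketch for checking the result from inside the VM; the expected value is the hostname provisioned just above:

	    # Inspect the loopback mapping and the effective hostname inside the guest.
	    grep -n '^127.0.1.1' /etc/hosts
	    hostname          # expected: ha-181247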
	I0917 17:27:07.555071   36365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:27:07.555090   36365 buildroot.go:174] setting up certificates
	I0917 17:27:07.555099   36365 provision.go:84] configureAuth start
	I0917 17:27:07.555107   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.555370   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:27:07.558099   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.558554   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.558573   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.558770   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.561424   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.561798   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.561829   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.562048   36365 provision.go:143] copyHostCerts
	I0917 17:27:07.562075   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:27:07.562111   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:27:07.562121   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:27:07.562193   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:27:07.562268   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:27:07.562285   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:27:07.562289   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:27:07.562325   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:27:07.562368   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:27:07.562382   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:27:07.562390   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:27:07.562413   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:27:07.562470   36365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247 san=[127.0.0.1 192.168.39.195 ha-181247 localhost minikube]
	I0917 17:27:07.646706   36365 provision.go:177] copyRemoteCerts
	I0917 17:27:07.646768   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:27:07.646792   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.649927   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.650353   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.650383   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.650674   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.650898   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.651133   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.651310   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:27:07.736834   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:27:07.736905   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:27:07.765943   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:27:07.766046   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 17:27:07.793860   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:27:07.793926   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 17:27:07.822875   36365 provision.go:87] duration metric: took 267.764697ms to configureAuth
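	The three scp calls above install the CA, server certificate, and server key under /etc/docker in the guest. A hedged sketch for sanity-checking them with openssl, run inside the VM:

	    # Print the subject and expiry of the freshly copied server certificate.
	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate
	    # Confirm it chains to the CA that was copied alongside it.
	    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem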
	I0917 17:27:07.822913   36365 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:27:07.823205   36365 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:27:07.823299   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.826114   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.826599   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.826630   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.826791   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.827005   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.827150   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.827303   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.827482   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.827650   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.827671   36365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:28:38.754051   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:28:38.754084   36365 machine.go:96] duration metric: took 1m31.570902352s to provisionDockerMachine
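	The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the insecure-registry setting for the service CIDR takes effect; the long gap in the timestamps above is that restart. A sketch for confirming the file landed, run inside the VM (how the option surfaces in the running daemon depends on the crio unit, which the log does not show):

	    # The environment file written by the provisioning step.
	    cat /etc/sysconfig/crio.minikube
	    # Quick look at the restarted service.
	    sudo systemctl status crio --no-pager | head -n 5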
	I0917 17:28:38.754097   36365 start.go:293] postStartSetup for "ha-181247" (driver="kvm2")
	I0917 17:28:38.754111   36365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:28:38.754129   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:38.754474   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:28:38.754508   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.757777   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.758268   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.758295   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.758498   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.758702   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.759018   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.759210   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:38.845570   36365 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:28:38.850043   36365 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:28:38.850072   36365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:28:38.850137   36365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:28:38.850231   36365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:28:38.850242   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:28:38.850335   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:28:38.860193   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:28:38.887078   36365 start.go:296] duration metric: took 132.965629ms for postStartSetup
	I0917 17:28:38.887132   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:38.887445   36365 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 17:28:38.887471   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.890078   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.890516   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.890540   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.890766   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.890944   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.891106   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.891209   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	W0917 17:28:38.976129   36365 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0917 17:28:38.976162   36365 fix.go:56] duration metric: took 1m31.815604589s for fixHost
	I0917 17:28:38.976183   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.978870   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.979320   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.979365   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.979461   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.979664   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.979805   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.979945   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.980090   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:28:38.980267   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:28:38.980279   36365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:28:39.094543   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594119.035454615
	
	I0917 17:28:39.094566   36365 fix.go:216] guest clock: 1726594119.035454615
	I0917 17:28:39.094575   36365 fix.go:229] Guest: 2024-09-17 17:28:39.035454615 +0000 UTC Remote: 2024-09-17 17:28:38.976169426 +0000 UTC m=+91.954213076 (delta=59.285189ms)
	I0917 17:28:39.094599   36365 fix.go:200] guest clock delta is within tolerance: 59.285189ms
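	The check above runs date +%s.%N inside the guest and compares it against the host clock at the moment the SSH command returned, accepting small deltas. A rough equivalent by hand; the key path, user, and IP are the ones shown in the sshutil lines above:

	    # Compare guest and host clocks; a small delta is expected.
	    host_now=$(date +%s.%N)
	    guest_now=$(ssh -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa \
	        docker@192.168.39.195 'date +%s.%N')
	    echo "guest - host = $(echo "$guest_now - $host_now" | bc) s"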
	I0917 17:28:39.094605   36365 start.go:83] releasing machines lock for "ha-181247", held for 1m31.934061095s
	I0917 17:28:39.094632   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.094904   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:28:39.097681   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.098033   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.098065   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.098208   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.098937   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.099140   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.099264   36365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:28:39.099313   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:39.099361   36365 ssh_runner.go:195] Run: cat /version.json
	I0917 17:28:39.099382   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:39.101895   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.101921   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102309   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.102344   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102373   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.102419   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102483   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:39.102700   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:39.102710   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:39.102904   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:39.102906   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:39.103072   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:39.103097   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:39.103211   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:39.194762   36365 ssh_runner.go:195] Run: systemctl --version
	I0917 17:28:39.215412   36365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:28:39.392490   36365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:28:39.401799   36365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:28:39.401871   36365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:28:39.412258   36365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 17:28:39.412283   36365 start.go:495] detecting cgroup driver to use...
	I0917 17:28:39.412336   36365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:28:39.433268   36365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:28:39.449082   36365 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:28:39.449142   36365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:28:39.464610   36365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:28:39.479714   36365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:28:39.645679   36365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:28:39.809867   36365 docker.go:233] disabling docker service ...
	I0917 17:28:39.809948   36365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:28:39.831643   36365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:28:39.848306   36365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:28:40.021274   36365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:28:40.171988   36365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
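	The sequence above stops the docker socket and service, disables the socket, masks the service, and then checks that nothing is still active before relying solely on CRI-O. A short sketch for verifying the same thing interactively, run inside the VM:

	    systemctl is-enabled docker.service   # expected: masked
	    systemctl is-active docker.service    # expected: inactive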
	I0917 17:28:40.188607   36365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:28:40.209086   36365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:28:40.209154   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.221652   36365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:28:40.221736   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.233447   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.245864   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.257876   36365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:28:40.269788   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.281777   36365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.294126   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.305395   36365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:28:40.315744   36365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:28:40.326347   36365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:28:40.472679   36365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:28:42.893452   36365 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.420738973s)
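	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager with conmon in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, while /etc/crictl.yaml points crictl at the CRI-O socket. A sketch for inspecting the result after the restart, run inside the VM:

	    # Settings written by the sed edits above.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # crictl uses the endpoint from /etc/crictl.yaml and should reach the restarted runtime.
	    sudo crictl info | head -n 20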
	I0917 17:28:42.893486   36365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:28:42.893538   36365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:28:42.901847   36365 start.go:563] Will wait 60s for crictl version
	I0917 17:28:42.901905   36365 ssh_runner.go:195] Run: which crictl
	I0917 17:28:42.905812   36365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:28:42.944093   36365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:28:42.944199   36365 ssh_runner.go:195] Run: crio --version
	I0917 17:28:42.979559   36365 ssh_runner.go:195] Run: crio --version
	I0917 17:28:43.013266   36365 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:28:43.015132   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:28:43.018395   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:43.018773   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:43.018804   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:43.018997   36365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:28:43.025026   36365 kubeadm.go:883] updating cluster {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:28:43.025379   36365 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:28:43.025453   36365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:28:43.075504   36365 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:28:43.075529   36365 crio.go:433] Images already preloaded, skipping extraction
	I0917 17:28:43.075585   36365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:28:43.113122   36365 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:28:43.113148   36365 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:28:43.113160   36365 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.1 crio true true} ...
	I0917 17:28:43.113285   36365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:28:43.113365   36365 ssh_runner.go:195] Run: crio config
	I0917 17:28:43.167249   36365 cni.go:84] Creating CNI manager for ""
	I0917 17:28:43.167278   36365 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 17:28:43.167288   36365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:28:43.167315   36365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181247 NodeName:ha-181247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:28:43.167486   36365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181247"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:28:43.167510   36365 kube-vip.go:115] generating kube-vip config ...
	I0917 17:28:43.167561   36365 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:28:43.179878   36365 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:28:43.180003   36365 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
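Editor's note: the static pod manifest above is the HA entry point for this cluster; kube-vip advertises the virtual IP 192.168.39.254 on eth0 and load-balances port 8443 across the control-plane nodes. Below is a minimal, illustrative Go sketch (not part of the test suite or the minikube source) of how that VIP could be probed from outside the cluster; the /livez endpoint and the InsecureSkipVerify setting are assumptions made purely for a reachability check.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 192.168.39.254:8443 is the APIServerHAVIP that kube-vip advertises in
	// the manifest above; /livez is the apiserver liveness endpoint.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification only because this is a bare reachability
			// probe; the real serving cert is signed by minikubeCA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/livez")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded:", resp.Status)
}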
	I0917 17:28:43.180088   36365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:28:43.190295   36365 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:28:43.190385   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 17:28:43.200596   36365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0917 17:28:43.219373   36365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:28:43.237790   36365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0917 17:28:43.257661   36365 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:28:43.276695   36365 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:28:43.281865   36365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:28:43.429986   36365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:28:43.445803   36365 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.195
	I0917 17:28:43.445827   36365 certs.go:194] generating shared ca certs ...
	I0917 17:28:43.445843   36365 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.446017   36365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:28:43.446072   36365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:28:43.446087   36365 certs.go:256] generating profile certs ...
	I0917 17:28:43.446184   36365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:28:43.446219   36365 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d
	I0917 17:28:43.446236   36365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.122 192.168.39.254]
	I0917 17:28:43.570763   36365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d ...
	I0917 17:28:43.570800   36365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d: {Name:mk034a01997b55799b7e68b7917c6787739766d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.570981   36365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d ...
	I0917 17:28:43.570993   36365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d: {Name:mk56769bdec63cb34da7404ed80355a546378f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.571066   36365 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:28:43.571233   36365 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
	I0917 17:28:43.571375   36365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:28:43.571390   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:28:43.571404   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:28:43.571417   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:28:43.571429   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:28:43.571444   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:28:43.571457   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:28:43.571470   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:28:43.571482   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:28:43.571533   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:28:43.571560   36365 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:28:43.571569   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:28:43.571589   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:28:43.571610   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:28:43.571632   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:28:43.571672   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:28:43.571700   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.571714   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.571726   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.572347   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:28:43.600489   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:28:43.627271   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:28:43.653537   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:28:43.680440   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 17:28:43.706154   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 17:28:43.732565   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:28:43.758769   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:28:43.784917   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:28:43.811978   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:28:43.837533   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:28:43.863590   36365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:28:43.881556   36365 ssh_runner.go:195] Run: openssl version
	I0917 17:28:43.887893   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:28:43.901761   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.906902   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.906964   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.913335   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:28:43.923587   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:28:43.935186   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.939831   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.939880   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.945844   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:28:43.956175   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:28:43.968811   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.973665   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.973728   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.979691   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:28:43.990007   36365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:28:43.995067   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 17:28:44.001113   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 17:28:44.007052   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 17:28:44.013269   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 17:28:44.019312   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 17:28:44.030381   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
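Editor's note: each `openssl x509 -checkend 86400` invocation above asks whether the given certificate expires within the next 24 hours (86,400 seconds); a failing check would cause the certificate to be regenerated before the cluster restart proceeds. For reference, a rough Go equivalent of that check is sketched below; it is not taken from the minikube source, and the path shown is just one of the certificates listed above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above; any path under
	// /var/lib/minikube/certs works the same way.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same question `-checkend 86400` answers: does it expire within 24h?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for more than 24h")
}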
	I0917 17:28:44.036653   36365 kubeadm.go:392] StartCluster: {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:28:44.036781   36365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:28:44.036839   36365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:28:44.081936   36365 cri.go:89] found id: "c0b6697ed71d9634546240444406aecec2623303f0c13d18dc5c2f4e4fe9559d"
	I0917 17:28:44.081965   36365 cri.go:89] found id: "5800f16007ffd726fc1ae2824192d18e0680d6772934633730106e07505d6321"
	I0917 17:28:44.081971   36365 cri.go:89] found id: "16a324568e3b36f8c61b1b0ff2dadbfd908eae0771d6c335b9ae9b62cf27023e"
	I0917 17:28:44.081976   36365 cri.go:89] found id: "f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5"
	I0917 17:28:44.081980   36365 cri.go:89] found id: "595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242"
	I0917 17:28:44.081985   36365 cri.go:89] found id: "4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba"
	I0917 17:28:44.081989   36365 cri.go:89] found id: "aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d"
	I0917 17:28:44.081993   36365 cri.go:89] found id: "8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2"
	I0917 17:28:44.081997   36365 cri.go:89] found id: "fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5"
	I0917 17:28:44.082004   36365 cri.go:89] found id: "e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91"
	I0917 17:28:44.082008   36365 cri.go:89] found id: "1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a"
	I0917 17:28:44.082012   36365 cri.go:89] found id: "2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4"
	I0917 17:28:44.082016   36365 cri.go:89] found id: "c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d"
	I0917 17:28:44.082020   36365 cri.go:89] found id: ""
	I0917 17:28:44.082061   36365 ssh_runner.go:195] Run: sudo runc list -f json
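Editor's note: the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above returns bare container IDs, one per line, which is what the `found id:` entries echo back (the final empty entry is just the trailing newline). A minimal Go sketch of that listing step is shown below as an illustration, assuming crictl is available on the node; it is not the cri.go implementation.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the log line above: list every kube-system container the CRI
	// runtime knows about, in any state, printing only the container IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}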
	
	
	==> CRI-O <==
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.875600621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bc2c0f3-e273-4dca-a87e-7444b24ae140 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.877435679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e605ce0-1f5c-4c62-8fc5-1afa56455a6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.877899097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594282877872260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e605ce0-1f5c-4c62-8fc5-1afa56455a6b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.878784212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9fd1409-3e0a-4e80-861e-74b47605958d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.878845831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9fd1409-3e0a-4e80-861e-74b47605958d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.880249852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9fd1409-3e0a-4e80-861e-74b47605958d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.914560006Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=de144c08-1e4f-42fd-a524-a6ff5bea13c3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.915020156Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-w8wxj,Uid:681ace64-6c78-437e-9e9d-46edd2b4a8c4,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594163601019428,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:20:24.355639436Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-181247,Uid:aed6fcaf0b2bec2d4bdeb50696d03324,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726594144702823054,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{kubernetes.io/config.hash: aed6fcaf0b2bec2d4bdeb50696d03324,kubernetes.io/config.seen: 2024-09-17T17:28:43.218896298Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bdthh,Uid:63ae9d00-44ce-47be-80c5-12144ff8c69b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129994639709,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-17T17:18:10.377502035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-181247,Uid:3afa944276158c101e1b388244401851,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129917322376,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: 3afa944276158c101e1b388244401851,kubernetes.io/config.seen: 2024-09-17T17:17:52.337838488Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-181247,Uid:48e8
91f6ff7af12b13f4bafa92c7341b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129910806567,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48e891f6ff7af12b13f4bafa92c7341b,kubernetes.io/config.seen: 2024-09-17T17:17:52.337832125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&PodSandboxMetadata{Name:kube-proxy-7rrxk,Uid:a075630a-48df-429f-98ef-49bca2d9dac5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129896246427,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a075630a-48df-429f-98ef-49bca2d9dac5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:17:56.765746551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5lmg4,Uid:1052e249-3530-4220-8214-0c36a02c4215,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129891975090,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:18:10.361922817Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&PodSandboxMetadata{Name:storage-provisione
r,Uid:fcef4cf0-61a6-4f9f-9644-f17f7f819237,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129856902333,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"servic
eAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-17T17:18:10.378683672Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&PodSandboxMetadata{Name:etcd-ha-181247,Uid:5748534d2a3a40ee72c6688e8f4f184d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129852541424,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 5748534d2a3a40ee72c6688e8f4f184d,kubernetes.io/config.seen: 2024-09-17T17:17:52.337837433Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:392
dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&PodSandboxMetadata{Name:kindnet-2tkbp,Uid:882de4ca-d789-403e-a22e-22fbc776af10,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129849556899,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:17:56.767471868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-181247,Uid:7ad9722f4b7cb935efee60829f463e82,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726594129845547517,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ad9722f4b7cb935efee60829f463e82,kubernetes.io/config.seen: 2024-09-17T17:17:52.337839463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-w8wxj,Uid:681ace64-6c78-437e-9e9d-46edd2b4a8c4,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593624674528897,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:20:24.355639436Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bdthh,Uid:63ae9d00-44ce-47be-80c5-12144ff8c69b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593490701691899,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:18:10.377502035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5lmg4,Uid:1052e249-3530-4220-8214-0c36a02c4215,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593490671206414,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:18:10.361922817Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&PodSandboxMetadata{Name:kube-proxy-7rrxk,Uid:a075630a-48df-429f-98ef-49bca2d9dac5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593478572251843,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:17:56.765746551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&PodSandboxMetadata{Name:kindnet-2tkbp,Uid:882de4ca-d789-403e-a22e-22fbc776af10,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593477982391841,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:17:56.767471868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&PodSandboxMetadata{Name:etcd-ha-181247,Uid:5748534d2a3a40ee72c6688e8f4f184d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593465731491447,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 5748534d2a3a40ee72c6688e8f4f184d,kubernetes.io/config.seen: 2024-09-17T17:17:45.206269745Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-181247,Uid:48e891f6ff7af12b13f4bafa92c7341b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726593465665384927,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48e891f6
ff7af12b13f4bafa92c7341b,kubernetes.io/config.seen: 2024-09-17T17:17:45.206264630Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=de144c08-1e4f-42fd-a524-a6ff5bea13c3 name=/runtime.v1.RuntimeService/ListPodSandbox
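The CRI RPCs captured above (ListPodSandbox, ListContainers, Version, ImageFsInfo) can be re-issued by hand on the node to cross-check the runtime state at a later point. A minimal sketch, assuming the default CRI-O socket path and that crictl is available inside the VM (reached here via `minikube ssh -p ha-181247`; the profile name is taken from the node name in the log and may differ in other runs):

    # inside the node, e.g. after: minikube ssh -p ha-181247
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods         # ListPodSandbox
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # ListContainers, all states
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # Version (cri-o 1.29.1 in this log)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageFsInfo

These are the same endpoints the kubelet polls, so the output should line up with the sandbox and container IDs shown in the dump.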
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.916251699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=133aad51-7c5c-4a7c-9f97-53ee98983050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.916316571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=133aad51-7c5c-4a7c-9f97-53ee98983050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.916755163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=133aad51-7c5c-4a7c-9f97-53ee98983050 name=/runtime.v1.RuntimeService/ListContainers
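The full ListContainers response repeats on every poll and is hard to scan; the interesting signal (for example kube-apiserver at restartCount 3 and storage-provisioner at restartCount 4) is buried in per-container annotations. A minimal sketch for pulling only name, attempt and state out of the same RPC, assuming jq is installed on the node and that crictl's JSON output follows the CRI ListContainersResponse field names (verify against the installed crictl version):

    # on the node; prints one "name<TAB>attempt<TAB>state" line per container
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json \
      | jq -r '.containers[] | [.metadata.name, (.metadata.attempt|tostring), .state] | @tsv'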
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.930954410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86c4252a-2ff7-4cdb-bedb-1c7452067be9 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.931030467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86c4252a-2ff7-4cdb-bedb-1c7452067be9 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.932442532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42b90386-6b09-43e8-afcb-d368f6eee1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.932888574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594282932864175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b90386-6b09-43e8-afcb-d368f6eee1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.933418432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8ec22cc-2dad-490a-880e-695f3848a5fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.933501079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8ec22cc-2dad-490a-880e-695f3848a5fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.933983034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8ec22cc-2dad-490a-880e-695f3848a5fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.991014982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bfeb8d9-2920-4126-a9b3-dc0ba1d58ecb name=/runtime.v1.RuntimeService/Version
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.991223278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bfeb8d9-2920-4126-a9b3-dc0ba1d58ecb name=/runtime.v1.RuntimeService/Version
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.993910653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=181b7406-65f6-4195-af23-70df0334dd6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:22 ha-181247 crio[3559]: time="2024-09-17 17:31:22.994864277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594282994828330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=181b7406-65f6-4195-af23-70df0334dd6e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:31:23 ha-181247 crio[3559]: time="2024-09-17 17:31:23.002476176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d90c32f8-761f-46bf-b7c4-a4916114185c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:23 ha-181247 crio[3559]: time="2024-09-17 17:31:23.002642660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d90c32f8-761f-46bf-b7c4-a4916114185c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:31:23 ha-181247 crio[3559]: time="2024-09-17 17:31:23.003502084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d90c32f8-761f-46bf-b7c4-a4916114185c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66e3c05e0a322       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   8a99278a4aaba       storage-provisioner
	902398e71d1c4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   5403d6c44b701       kube-controller-manager-ha-181247
	f5d783c80592a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   69fceaec5a1d1       busybox-7dff88458-w8wxj
	7d3de5c9eb5bc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   331d57b91b4af       kube-apiserver-ha-181247
	9ad4df81626ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   8a99278a4aaba       storage-provisioner
	3c401bf90a7a1       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   0122ff4dbdcc1       kube-vip-ha-181247
	e006a9b2aae67       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   ef2f4d54b977f       kube-proxy-7rrxk
	79aeef1c48943       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   b6c98a7fec7f5       coredns-7c65d6cfc9-5lmg4
	53ab2cc61c85b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   c8290f8fe8436       coredns-7c65d6cfc9-bdthh
	e3b0015ac8d23       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   392dae9735c73       kindnet-2tkbp
	ea3f2b560fe5b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   37fc3cb107482       kube-scheduler-ha-181247
	4062b287e3813       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   331d57b91b4af       kube-apiserver-ha-181247
	f3ee2daa7460a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   7b562ed00e84e       etcd-ha-181247
	37b475bb0d2d9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   5403d6c44b701       kube-controller-manager-ha-181247
	c1e590e905eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   032ab62b0ab68       busybox-7dff88458-w8wxj
	f192df08c3590       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   4564f11734089       coredns-7c65d6cfc9-bdthh
	595bdaca307f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   251b80e9b641b       coredns-7c65d6cfc9-5lmg4
	aa3e79172e867       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   2c5d3e765b253       kube-proxy-7rrxk
	8d41e13428885       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   030199bb820c5       kindnet-2tkbp
	e131e7c4af3fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   6c0fc2dc035f9       etcd-ha-181247
	2b77bc3ea3167       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   64083dac55fed       kube-scheduler-ha-181247
	
	
	==> coredns [53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1955337142]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:28:57.426) (total time: 10001ms):
	Trace[1955337142]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:29:07.427)
	Trace[1955337142]: [10.001728716s] [10.001728716s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56302->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56302->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242] <==
	[INFO] 10.244.0.4:58082 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00083865s
	[INFO] 10.244.1.2:33599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003427065s
	[INFO] 10.244.1.2:48415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218431s
	[INFO] 10.244.1.2:36800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118274s
	[INFO] 10.244.2.2:43997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248398s
	[INFO] 10.244.2.2:35973 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001811s
	[INFO] 10.244.2.2:49572 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172284s
	[INFO] 10.244.0.4:47826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002065843s
	[INFO] 10.244.0.4:36193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199582s
	[INFO] 10.244.0.4:50628 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110526s
	[INFO] 10.244.0.4:44724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114759s
	[INFO] 10.244.0.4:42511 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083739s
	[INFO] 10.244.2.2:46937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116808s
	[INFO] 10.244.2.2:44451 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173075s
	[INFO] 10.244.0.4:40459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064325s
	[INFO] 10.244.1.2:49457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184596s
	[INFO] 10.244.1.2:38498 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205346s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130934s
	[INFO] 10.244.2.2:41589 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130541s
	[INFO] 10.244.0.4:45130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138569s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a] <==
	Trace[1686718204]: [10.001280091s] [10.001280091s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[490376353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:28:52.786) (total time: 10002ms):
	Trace[490376353]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:29:02.788)
	Trace[490376353]: [10.002109707s] [10.002109707s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[416424229]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:29:05.715) (total time: 10050ms):
	Trace[416424229]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer 10050ms (17:29:15.765)
	Trace[416424229]: [10.050516494s] [10.050516494s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5] <==
	[INFO] 10.244.2.2:42284 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132466s
	[INFO] 10.244.0.4:37678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213338s
	[INFO] 10.244.0.4:44751 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122274s
	[INFO] 10.244.0.4:56988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001602111s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026031s
	[INFO] 10.244.1.2:40978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206846s
	[INFO] 10.244.1.2:41313 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097139s
	[INFO] 10.244.1.2:50208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151609s
	[INFO] 10.244.2.2:49264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143158s
	[INFO] 10.244.2.2:54921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162093s
	[INFO] 10.244.0.4:54768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211558s
	[INFO] 10.244.0.4:47021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048005s
	[INFO] 10.244.0.4:52698 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004567s
	[INFO] 10.244.1.2:39357 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237795s
	[INFO] 10.244.1.2:48172 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183611s
	[INFO] 10.244.2.2:56434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125357s
	[INFO] 10.244.2.2:37159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179695s
	[INFO] 10.244.0.4:40381 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150761s
	[INFO] 10.244.0.4:39726 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074302s
	[INFO] 10.244.0.4:39990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=8m42s&timeoutSeconds=522&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> describe nodes <==
	Name:               ha-181247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-181247
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef45c02f40245a0a3ede964289ca350
	  System UUID:                fef45c02-f402-45a0-a3ed-e964289ca350
	  Boot ID:                    3253b46a-acef-407f-8fd6-3d5cae46a6bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w8wxj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-5lmg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-bdthh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-181247                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-2tkbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-181247             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-181247    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-7rrxk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-181247             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-181247                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 108s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-181247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-181247 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-181247 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-181247 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Warning  ContainerGCFailed        3m31s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m52s (x3 over 3m41s)  kubelet          Node ha-181247 status is now: NodeNotReady
	  Normal   RegisteredNode           113s                   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           112s                   node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           40s                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	
	
	Name:               ha-181247-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:31:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-181247-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2585a68084874db38baf46d679282ed1
	  System UUID:                2585a680-8487-4db3-8baf-46d679282ed1
	  Boot ID:                    4746ebdc-02a2-4372-a8b4-1d642059f3bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96b8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-181247-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qqpgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-181247-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-181247-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xmfcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-181247-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-181247-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 106s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  NodeNotReady             9m2s                   node-controller  Node ha-181247-m02 status is now: NodeNotReady
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m17s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                   node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           112s                   node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           40s                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	
	
	Name:               ha-181247-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_20_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:30:55 +0000   Tue, 17 Sep 2024 17:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:30:55 +0000   Tue, 17 Sep 2024 17:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:30:55 +0000   Tue, 17 Sep 2024 17:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:30:55 +0000   Tue, 17 Sep 2024 17:30:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    ha-181247-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b42e80aa44a47aebcfcad072d252d58
	  System UUID:                2b42e80a-a44a-47ae-bcfc-ad072d252d58
	  Boot ID:                    1b13da43-33cf-49b7-8035-291556304f1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mxrbl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-181247-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-tkbmg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-181247-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-181247-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-42gpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-181247-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-181247-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-181247-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal   RegisteredNode           113s               node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	  Normal   NodeNotReady             73s                node-controller  Node ha-181247-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 59s (x2 over 59s)  kubelet          Node ha-181247-m03 has been rebooted, boot id: 1b13da43-33cf-49b7-8035-291556304f1c
	  Normal   NodeHasSufficientMemory  59s (x3 over 59s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x3 over 59s)  kubelet          Node ha-181247-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x3 over 59s)  kubelet          Node ha-181247-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             59s                kubelet          Node ha-181247-m03 status is now: NodeNotReady
	  Normal   NodeReady                59s                kubelet          Node ha-181247-m03 status is now: NodeReady
	  Normal   RegisteredNode           40s                node-controller  Node ha-181247-m03 event: Registered Node ha-181247-m03 in Controller
	
	
	Name:               ha-181247-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_21_01_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:31:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:31:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:31:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:31:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-181247-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b33a6f0f712480eacc4183b870e9eb2
	  System UUID:                6b33a6f0-f712-480e-acc4-183b870e9eb2
	  Boot ID:                    91bbae3b-b227-490a-aaac-245b32a23838
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ntzg5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-shlht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-181247-m04 status is now: NodeReady
	  Normal   RegisteredNode           113s               node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   NodeNotReady             73s                node-controller  Node ha-181247-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           40s                node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-181247-m04 has been rebooted, boot id: 91bbae3b-b227-490a-aaac-245b32a23838
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-181247-m04 status is now: NodeReady
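
The node dumps above appear to be kubectl describe output captured by minikube's log collector during the restart. A minimal sketch for re-querying the same state against this cluster (the context name ha-181247 is assumed to match the minikube profile):

    kubectl --context ha-181247 get nodes -o wide
    kubectl --context ha-181247 describe node ha-181247-m04

get nodes -o wide gives the one-line readiness/IP summary per node; describe node reproduces the full condition and event view shown above.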
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.348569] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.067341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053278] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.203985] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.135345] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.305138] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.213157] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +4.747183] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.062031] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.385150] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.092383] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.405737] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 17:18] kauditd_printk_skb: 41 callbacks suppressed
	[ +43.240359] kauditd_printk_skb: 26 callbacks suppressed
	[Sep17 17:28] systemd-fstab-generator[3484]: Ignoring "noauto" option for root device
	[  +0.170335] systemd-fstab-generator[3496]: Ignoring "noauto" option for root device
	[  +0.206124] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[  +0.160017] systemd-fstab-generator[3522]: Ignoring "noauto" option for root device
	[  +0.305640] systemd-fstab-generator[3550]: Ignoring "noauto" option for root device
	[  +2.953177] systemd-fstab-generator[3645]: Ignoring "noauto" option for root device
	[  +6.570640] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.419348] kauditd_printk_skb: 85 callbacks suppressed
	[Sep17 17:29] kauditd_printk_skb: 11 callbacks suppressed
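
The dmesg excerpt covers both the original boot and the 17:28 restart, and is mostly systemd-fstab-generator and kauditd rate-limit noise. A minimal sketch, assuming the ha-181247 profile, for collecting the full bundle (this dmesg section plus the per-container logs below) or just the raw kernel log:

    minikube -p ha-181247 logs --file=ha-181247-logs.txt
    minikube ssh -p ha-181247 -- dmesg | tail -n 100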
	
	
	==> etcd [e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91] <==
	{"level":"info","ts":"2024-09-17T17:27:07.965428Z","caller":"traceutil/trace.go:171","msg":"trace[481436337] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"926.393034ms","start":"2024-09-17T17:27:07.039031Z","end":"2024-09-17T17:27:07.965424Z","steps":["trace[481436337] 'agreement among raft nodes before linearized reading'  (duration: 886.38891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:27:07.965441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:27:07.039013Z","time spent":"926.423288ms","remote":"127.0.0.1:36716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	2024/09/17 17:27:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:27:08.011128Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6657043732157711009,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:27:08.036952Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:27:08.037009Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:27:08.037300Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:27:08.037454Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037491Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037537Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037639Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037688Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037774Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037781Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037790Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037928Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.038007Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.038018Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.041025Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-09-17T17:27:08.041219Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-09-17T17:27:08.041250Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-181247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"warn","ts":"2024-09-17T17:27:08.041371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.03220608s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d] <==
	{"level":"warn","ts":"2024-09-17T17:30:22.796935Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:22.797206Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:26.213459Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:26.213605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:26.799610Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:26.799787Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:30.801784Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:30.801887Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:31.213961Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:31.214089Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:34.804172Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.122:2380/version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:34.804347Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e75aed46b631937d","error":"Get \"https://192.168.39.122:2380/version\": dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-17T17:30:35.632398Z","caller":"traceutil/trace.go:171","msg":"trace[1807690037] transaction","detail":"{read_only:false; response_revision:2378; number_of_response:1; }","duration":"132.388874ms","start":"2024-09-17T17:30:35.499972Z","end":"2024-09-17T17:30:35.632361Z","steps":["trace[1807690037] 'process raft request'  (duration: 132.258578ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:30:36.159863Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:30:36.172622Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:30:36.177453Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:30:36.195818Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"e75aed46b631937d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-17T17:30:36.195890Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:30:36.202309Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"324857e3fe6e5c62","to":"e75aed46b631937d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-17T17:30:36.202414Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:30:36.216913Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:36.217017Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:37.278305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.266368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-181247-m03\" ","response":"range_response_count:1 size:6074"}
	{"level":"info","ts":"2024-09-17T17:30:37.278377Z","caller":"traceutil/trace.go:171","msg":"trace[2110576087] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-181247-m03; range_end:; response_count:1; response_revision:2384; }","duration":"103.397645ms","start":"2024-09-17T17:30:37.174967Z","end":"2024-09-17T17:30:37.278365Z","steps":["trace[2110576087] 'range keys from in-memory index tree'  (duration: 102.119639ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:30:44.722808Z","caller":"traceutil/trace.go:171","msg":"trace[774956040] transaction","detail":"{read_only:false; response_revision:2428; number_of_response:1; }","duration":"115.131679ms","start":"2024-09-17T17:30:44.607658Z","end":"2024-09-17T17:30:44.722789Z","steps":["trace[774956040] 'process raft request'  (duration: 114.172784ms)"],"step_count":1}
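
The two etcd excerpts bracket the restart: the old instance (e131e7c4...) stops serving and drops its peers at 17:27:08, and the new instance (f3ee2daa...) keeps retrying peer 192.168.39.122:2380 (ha-181247-m03) until the raft streams are re-established at 17:30:36. A hedged sketch for checking member health after such a restart, exec'd through the etcd static pod; the pod name and certificate paths below are assumptions based on minikube's usual naming and /var/lib/minikube/certs layout:

    kubectl --context ha-181247 -n kube-system exec etcd-ha-181247 -- etcdctl \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health --cluster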
	
	
	==> kernel <==
	 17:31:23 up 14 min,  0 users,  load average: 0.31, 0.51, 0.37
	Linux ha-181247 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2] <==
	I0917 17:26:39.804251       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:39.804304       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:39.804468       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:39.804495       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:39.804550       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:39.804573       1 main.go:299] handling current node
	I0917 17:26:39.804585       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:39.804590       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:49.804697       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:49.804810       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:49.804952       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:49.805134       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:49.805235       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:49.805256       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:49.805360       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:49.805382       1 main.go:299] handling current node
	I0917 17:26:59.801480       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:59.801620       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:59.801776       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:59.801803       1 main.go:299] handling current node
	I0917 17:26:59.801834       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:59.801860       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:59.801934       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:59.801964       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	E0917 17:27:00.597790       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1856&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f] <==
	I0917 17:30:52.016764       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:31:02.015833       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:31:02.015915       1 main.go:299] handling current node
	I0917 17:31:02.015968       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:31:02.015975       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:31:02.016167       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:31:02.016193       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:31:02.016241       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:31:02.016277       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:31:12.024797       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:31:12.025391       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:31:12.025771       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:31:12.025823       1 main.go:299] handling current node
	I0917 17:31:12.025854       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:31:12.025876       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:31:12.025964       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:31:12.025988       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:31:22.018609       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:31:22.018799       1 main.go:299] handling current node
	I0917 17:31:22.018843       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:31:22.018866       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:31:22.019013       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:31:22.019035       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:31:22.019208       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:31:22.019241       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
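
Both kindnet excerpts are just the periodic reconciliation loop logging each peer node's internal IP and PodCIDR; the only real error is the 17:27:00 watch failure against 10.96.0.1:443 while the API server was unreachable. A small sketch, with the same assumed ha-181247 context, for cross-checking the PodCIDR assignments straight from the Node objects:

    kubectl --context ha-181247 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR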
	
	
	==> kube-apiserver [4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633] <==
	I0917 17:28:51.539527       1 options.go:228] external host was not specified, using 192.168.39.195
	I0917 17:28:51.541801       1 server.go:142] Version: v1.31.1
	I0917 17:28:51.541887       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:52.443002       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:28:52.485235       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:28:52.488451       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:28:52.488475       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:28:52.488743       1 instance.go:232] Using reconciler: lease
	W0917 17:29:12.443614       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 17:29:12.443779       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0917 17:29:12.492126       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c] <==
	I0917 17:29:26.574349       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0917 17:29:26.571976       1 controller.go:119] Starting legacy_token_tracking_controller
	I0917 17:29:26.587199       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0917 17:29:26.689656       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:29:26.689894       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:29:26.689942       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:29:26.689984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:29:26.731668       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:29:26.738343       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:29:26.738382       1 policy_source.go:224] refreshing policies
	I0917 17:29:26.772287       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:29:26.772365       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:29:26.773015       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:29:26.773164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:29:26.773575       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:29:26.775102       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:29:26.779806       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 17:29:26.786833       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 17:29:26.787445       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:29:26.790895       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:29:26.813025       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:29:27.576898       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 17:29:28.003287       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0917 17:29:28.004708       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:29:28.018401       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
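
Of the two kube-apiserver instances, the first (4062b2...) dies at 17:29:12 with "Error creating leases: error creating storage factory: context deadline exceeded" because etcd was not yet reachable, while the second (7d3de5...) syncs its caches at 17:29:26 and resets the kubernetes Service endpoint to 192.168.39.195. A minimal sketch, assuming the ha-181247 context, for confirming the API server's health after such a restart:

    kubectl --context ha-181247 get --raw '/readyz?verbose'
    kubectl --context ha-181247 -n default get endpoints kubernetes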
	
	
	==> kube-controller-manager [37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12] <==
	I0917 17:28:51.927032       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:52.554740       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:52.554834       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:52.556580       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:52.557363       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:28:52.557512       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:52.557596       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0917 17:29:13.500097       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.195:8443/healthz\": dial tcp 192.168.39.195:8443: connect: connection refused"
	
	
	==> kube-controller-manager [902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e] <==
	I0917 17:30:00.760350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.976µs"
	I0917 17:30:10.159737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:10.160312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:10.198012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:10.198456       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:10.301605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.303723ms"
	I0917 17:30:10.301780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="103.687µs"
	I0917 17:30:11.248600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:15.071195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m02"
	I0917 17:30:15.513358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:21.328626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:24.438034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:24.449600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:25.228701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.171µs"
	I0917 17:30:25.431514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:30:25.590845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:40.842281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.873545ms"
	I0917 17:30:40.842595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.43µs"
	I0917 17:30:43.698201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:43.789477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:30:55.143874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m03"
	I0917 17:31:15.491628       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181247-m04"
	I0917 17:31:15.492859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:31:15.510257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:31:16.203030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
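
The second controller-manager instance (902398...) shows the node-ipam and endpointslice controllers re-syncing as m03 and m04 come back; the "Can't get CPU or zone information" warning at 17:31:15 lines up with ha-181247-m04's kubelet restart seen in the node events above. A small sketch, same assumed context, for watching the node conditions settle and pulling the related events:

    kubectl --context ha-181247 get nodes -w
    kubectl --context ha-181247 get events --field-selector involvedObject.name=ha-181247-m04 --sort-by=.lastTimestamp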
	
	
	==> kube-proxy [aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d] <==
	W0917 17:25:53.014102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:25:53.014181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0917 17:25:53.013797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:00.373530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0917 17:26:00.373596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:00.375120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0917 17:26:00.375327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:00.375413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:00.375469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:09.591378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:09.592301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:12.664182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:12.664298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:12.664518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:12.664904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:28.021738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:28.022535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:31.098459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:31.098513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:34.165943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:34.166134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:55.669916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:55.670287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:27:07.967298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:27:07.967374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
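
This kube-proxy instance spends 17:25-17:27 failing every list/watch against control-plane.minikube.internal:8443, which resolves to 192.168.39.254, with "no route to host"; that address is presumably the VIP managed by the kube-vip pods listed in the node dumps above, so the errors are consistent with the VIP being down for the whole restart window. A hedged sketch for probing the VIP from inside a node (curl in the guest image and anonymous access to /version are assumptions):

    minikube ssh -p ha-181247 -- curl -k --connect-timeout 5 https://192.168.39.254:8443/version
    minikube ssh -p ha-181247 -- ip addr | grep 192.168.39.254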
	
	
	==> kube-proxy [e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:28:55.477625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:28:58.549784       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:01.622492       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:07.766562       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:16.982830       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0917 17:29:35.156805       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0917 17:29:35.158962       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:29:35.234478       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:29:35.234552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:29:35.234592       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:29:35.237288       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:29:35.237942       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:29:35.237987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:29:35.239804       1 config.go:199] "Starting service config controller"
	I0917 17:29:35.239870       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:29:35.239907       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:29:35.239929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:29:35.240619       1 config.go:328] "Starting node config controller"
	I0917 17:29:35.240648       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:29:35.340512       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:29:35.340508       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:29:35.340688       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4] <==
	E0917 17:21:01.481288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-ntzg5"
	I0917 17:21:01.481324       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.481771       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod be89da91-3d03-49d5-9c40-8f0a10a29dc4(kube-system/kube-proxy-wxx9b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wxx9b"
	E0917 17:21:01.481794       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-wxx9b"
	I0917 17:21:01.481828       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.598636       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:21:01.598783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df1f81cf-787e-4442-b864-71023978df35(kube-system/kindnet-rjzts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rjzts"
	E0917 17:21:01.598965       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-rjzts"
	I0917 17:21:01.599124       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:26:56.980855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0917 17:26:57.326322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0917 17:26:57.933895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0917 17:26:58.198268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0917 17:26:58.776492       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0917 17:27:00.039461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0917 17:27:00.162384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0917 17:27:01.885926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0917 17:27:02.154029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:02.342778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0917 17:27:04.058585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:04.656014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:04.878521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0917 17:27:06.198799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:07.930610       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e] <==
	W0917 17:29:21.216901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.216997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.666156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.666270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.928199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.928338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.957700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.957792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.214553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.214653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.274806       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.195:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.274971       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.195:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.606810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.606856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.616650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.616733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.985937       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.986024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:23.468606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.195:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:23.468648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.195:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:26.602581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:29:26.602654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:29:26.603137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:29:26.603376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 17:29:31.917994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:30:05 ha-181247 kubelet[1302]: I0917 17:30:05.427556    1302 scope.go:117] "RemoveContainer" containerID="9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3"
	Sep 17 17:30:11 ha-181247 kubelet[1302]: I0917 17:30:11.426437    1302 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-181247" podUID="45c79311-640f-4df4-8902-e3b09f11d417"
	Sep 17 17:30:11 ha-181247 kubelet[1302]: I0917 17:30:11.456657    1302 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-181247"
	Sep 17 17:30:11 ha-181247 kubelet[1302]: I0917 17:30:11.501114    1302 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-181247" podUID="45c79311-640f-4df4-8902-e3b09f11d417"
	Sep 17 17:30:12 ha-181247 kubelet[1302]: E0917 17:30:12.629983    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594212629125706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:12 ha-181247 kubelet[1302]: E0917 17:30:12.630099    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594212629125706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:22 ha-181247 kubelet[1302]: E0917 17:30:22.632022    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594222631493543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:22 ha-181247 kubelet[1302]: E0917 17:30:22.632787    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594222631493543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:32 ha-181247 kubelet[1302]: E0917 17:30:32.635708    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594232635349725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:32 ha-181247 kubelet[1302]: E0917 17:30:32.635781    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594232635349725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:42 ha-181247 kubelet[1302]: E0917 17:30:42.639131    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594242638285416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:42 ha-181247 kubelet[1302]: E0917 17:30:42.639190    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594242638285416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:52 ha-181247 kubelet[1302]: E0917 17:30:52.458593    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:30:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:30:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:30:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:30:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:30:52 ha-181247 kubelet[1302]: E0917 17:30:52.641846    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594252641286384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:30:52 ha-181247 kubelet[1302]: E0917 17:30:52.641903    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594252641286384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:02 ha-181247 kubelet[1302]: E0917 17:31:02.644832    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594262644164814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:02 ha-181247 kubelet[1302]: E0917 17:31:02.644868    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594262644164814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:12 ha-181247 kubelet[1302]: E0917 17:31:12.647206    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594272646689000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:12 ha-181247 kubelet[1302]: E0917 17:31:12.647259    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594272646689000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:22 ha-181247 kubelet[1302]: E0917 17:31:22.649673    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594282648964567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:31:22 ha-181247 kubelet[1302]: E0917 17:31:22.649700    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594282648964567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:31:22.499786   37761 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-181247 -n ha-181247
helpers_test.go:261: (dbg) Run:  kubectl --context ha-181247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 stop -v=7 --alsologtostderr: exit status 82 (2m0.479500846s)

                                                
                                                
-- stdout --
	* Stopping node "ha-181247-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:31:41.960526   38173 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:31:41.960973   38173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:31:41.960992   38173 out.go:358] Setting ErrFile to fd 2...
	I0917 17:31:41.961001   38173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:31:41.961549   38173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:31:41.962125   38173 out.go:352] Setting JSON to false
	I0917 17:31:41.962232   38173 mustload.go:65] Loading cluster: ha-181247
	I0917 17:31:41.962665   38173 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:31:41.962775   38173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:31:41.962986   38173 mustload.go:65] Loading cluster: ha-181247
	I0917 17:31:41.963150   38173 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:31:41.963182   38173 stop.go:39] StopHost: ha-181247-m04
	I0917 17:31:41.963559   38173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:31:41.963608   38173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:31:41.979383   38173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0917 17:31:41.979825   38173 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:31:41.980354   38173 main.go:141] libmachine: Using API Version  1
	I0917 17:31:41.980389   38173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:31:41.980718   38173 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:31:41.983283   38173 out.go:177] * Stopping node "ha-181247-m04"  ...
	I0917 17:31:41.984674   38173 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 17:31:41.984702   38173 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:31:41.984925   38173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 17:31:41.984954   38173 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:31:41.987883   38173 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:31:41.988310   38173 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:31:07 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:31:41.988332   38173 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:31:41.988534   38173 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:31:41.988728   38173 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:31:41.988884   38173 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:31:41.989019   38173 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	I0917 17:31:42.076343   38173 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 17:31:42.130632   38173 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 17:31:42.184603   38173 main.go:141] libmachine: Stopping "ha-181247-m04"...
	I0917 17:31:42.184643   38173 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:31:42.186217   38173 main.go:141] libmachine: (ha-181247-m04) Calling .Stop
	I0917 17:31:42.189763   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 0/120
	I0917 17:31:43.191066   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 1/120
	I0917 17:31:44.192599   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 2/120
	I0917 17:31:45.193962   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 3/120
	I0917 17:31:46.195773   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 4/120
	I0917 17:31:47.197920   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 5/120
	I0917 17:31:48.199301   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 6/120
	I0917 17:31:49.200930   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 7/120
	I0917 17:31:50.202289   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 8/120
	I0917 17:31:51.203710   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 9/120
	I0917 17:31:52.205273   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 10/120
	I0917 17:31:53.206670   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 11/120
	I0917 17:31:54.208211   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 12/120
	I0917 17:31:55.209844   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 13/120
	I0917 17:31:56.211603   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 14/120
	I0917 17:31:57.213773   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 15/120
	I0917 17:31:58.215386   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 16/120
	I0917 17:31:59.216855   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 17/120
	I0917 17:32:00.218336   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 18/120
	I0917 17:32:01.219914   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 19/120
	I0917 17:32:02.221811   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 20/120
	I0917 17:32:03.223189   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 21/120
	I0917 17:32:04.224483   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 22/120
	I0917 17:32:05.226006   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 23/120
	I0917 17:32:06.227661   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 24/120
	I0917 17:32:07.229858   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 25/120
	I0917 17:32:08.231159   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 26/120
	I0917 17:32:09.232719   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 27/120
	I0917 17:32:10.233880   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 28/120
	I0917 17:32:11.235925   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 29/120
	I0917 17:32:12.238080   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 30/120
	I0917 17:32:13.239696   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 31/120
	I0917 17:32:14.241159   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 32/120
	I0917 17:32:15.242735   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 33/120
	I0917 17:32:16.244162   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 34/120
	I0917 17:32:17.246111   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 35/120
	I0917 17:32:18.247694   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 36/120
	I0917 17:32:19.249807   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 37/120
	I0917 17:32:20.251158   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 38/120
	I0917 17:32:21.252551   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 39/120
	I0917 17:32:22.254514   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 40/120
	I0917 17:32:23.255789   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 41/120
	I0917 17:32:24.257908   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 42/120
	I0917 17:32:25.259962   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 43/120
	I0917 17:32:26.261186   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 44/120
	I0917 17:32:27.263321   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 45/120
	I0917 17:32:28.264882   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 46/120
	I0917 17:32:29.266331   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 47/120
	I0917 17:32:30.267698   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 48/120
	I0917 17:32:31.268927   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 49/120
	I0917 17:32:32.271133   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 50/120
	I0917 17:32:33.272915   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 51/120
	I0917 17:32:34.274345   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 52/120
	I0917 17:32:35.275677   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 53/120
	I0917 17:32:36.276916   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 54/120
	I0917 17:32:37.278862   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 55/120
	I0917 17:32:38.280366   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 56/120
	I0917 17:32:39.281611   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 57/120
	I0917 17:32:40.283610   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 58/120
	I0917 17:32:41.284931   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 59/120
	I0917 17:32:42.286605   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 60/120
	I0917 17:32:43.288020   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 61/120
	I0917 17:32:44.290203   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 62/120
	I0917 17:32:45.291740   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 63/120
	I0917 17:32:46.293489   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 64/120
	I0917 17:32:47.295562   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 65/120
	I0917 17:32:48.297083   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 66/120
	I0917 17:32:49.299454   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 67/120
	I0917 17:32:50.301031   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 68/120
	I0917 17:32:51.302537   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 69/120
	I0917 17:32:52.304934   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 70/120
	I0917 17:32:53.306903   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 71/120
	I0917 17:32:54.308257   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 72/120
	I0917 17:32:55.309833   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 73/120
	I0917 17:32:56.311186   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 74/120
	I0917 17:32:57.313028   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 75/120
	I0917 17:32:58.314376   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 76/120
	I0917 17:32:59.315805   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 77/120
	I0917 17:33:00.317248   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 78/120
	I0917 17:33:01.318974   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 79/120
	I0917 17:33:02.321195   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 80/120
	I0917 17:33:03.322515   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 81/120
	I0917 17:33:04.323864   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 82/120
	I0917 17:33:05.325842   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 83/120
	I0917 17:33:06.327697   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 84/120
	I0917 17:33:07.329956   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 85/120
	I0917 17:33:08.331694   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 86/120
	I0917 17:33:09.333302   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 87/120
	I0917 17:33:10.334716   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 88/120
	I0917 17:33:11.335923   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 89/120
	I0917 17:33:12.337958   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 90/120
	I0917 17:33:13.339374   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 91/120
	I0917 17:33:14.341274   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 92/120
	I0917 17:33:15.342926   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 93/120
	I0917 17:33:16.344286   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 94/120
	I0917 17:33:17.346506   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 95/120
	I0917 17:33:18.347818   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 96/120
	I0917 17:33:19.349261   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 97/120
	I0917 17:33:20.350827   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 98/120
	I0917 17:33:21.352120   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 99/120
	I0917 17:33:22.354507   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 100/120
	I0917 17:33:23.355852   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 101/120
	I0917 17:33:24.357114   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 102/120
	I0917 17:33:25.358564   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 103/120
	I0917 17:33:26.360354   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 104/120
	I0917 17:33:27.362471   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 105/120
	I0917 17:33:28.363900   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 106/120
	I0917 17:33:29.366230   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 107/120
	I0917 17:33:30.367563   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 108/120
	I0917 17:33:31.369022   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 109/120
	I0917 17:33:32.371182   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 110/120
	I0917 17:33:33.372863   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 111/120
	I0917 17:33:34.374352   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 112/120
	I0917 17:33:35.375622   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 113/120
	I0917 17:33:36.378084   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 114/120
	I0917 17:33:37.379938   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 115/120
	I0917 17:33:38.381275   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 116/120
	I0917 17:33:39.382723   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 117/120
	I0917 17:33:40.385056   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 118/120
	I0917 17:33:41.386643   38173 main.go:141] libmachine: (ha-181247-m04) Waiting for machine to stop 119/120
	I0917 17:33:42.387720   38173 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 17:33:42.387769   38173 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0917 17:33:42.390120   38173 out.go:201] 
	W0917 17:33:42.391741   38173 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0917 17:33:42.391759   38173 out.go:270] * 
	* 
	W0917 17:33:42.394964   38173 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 17:33:42.396384   38173 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-181247 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
E0917 17:33:50.533611   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr: exit status 3 (18.931686937s)

                                                
                                                
-- stdout --
	ha-181247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181247-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:33:42.444192   38631 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:33:42.444305   38631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:33:42.444320   38631 out.go:358] Setting ErrFile to fd 2...
	I0917 17:33:42.444326   38631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:33:42.444515   38631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:33:42.444731   38631 out.go:352] Setting JSON to false
	I0917 17:33:42.444758   38631 mustload.go:65] Loading cluster: ha-181247
	I0917 17:33:42.444803   38631 notify.go:220] Checking for updates...
	I0917 17:33:42.445223   38631 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:33:42.445264   38631 status.go:255] checking status of ha-181247 ...
	I0917 17:33:42.445716   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.445776   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.471026   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0917 17:33:42.471615   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.472414   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.472446   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.472831   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.473010   38631 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:33:42.474779   38631 status.go:330] ha-181247 host status = "Running" (err=<nil>)
	I0917 17:33:42.474799   38631 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:33:42.475073   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.475106   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.490417   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0917 17:33:42.490868   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.491365   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.491387   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.491784   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.492005   38631 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:33:42.495013   38631 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:33:42.495430   38631 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:33:42.495458   38631 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:33:42.495633   38631 host.go:66] Checking if "ha-181247" exists ...
	I0917 17:33:42.495923   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.495965   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.511721   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0917 17:33:42.512145   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.512706   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.512724   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.513065   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.513281   38631 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:33:42.513491   38631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:42.513528   38631 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:33:42.516622   38631 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:33:42.517050   38631 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:33:42.517083   38631 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:33:42.517206   38631 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:33:42.517432   38631 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:33:42.517576   38631 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:33:42.517687   38631 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:33:42.606763   38631 ssh_runner.go:195] Run: systemctl --version
	I0917 17:33:42.614462   38631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:33:42.632271   38631 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:33:42.632308   38631 api_server.go:166] Checking apiserver status ...
	I0917 17:33:42.632347   38631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:33:42.650116   38631 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4710/cgroup
	W0917 17:33:42.661349   38631 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4710/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:33:42.661417   38631 ssh_runner.go:195] Run: ls
	I0917 17:33:42.665991   38631 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:33:42.670622   38631 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:33:42.670649   38631 status.go:422] ha-181247 apiserver status = Running (err=<nil>)
	I0917 17:33:42.670661   38631 status.go:257] ha-181247 status: &{Name:ha-181247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:33:42.670677   38631 status.go:255] checking status of ha-181247-m02 ...
	I0917 17:33:42.671064   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.671114   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.686556   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0917 17:33:42.687089   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.687612   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.687632   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.687959   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.688158   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetState
	I0917 17:33:42.689781   38631 status.go:330] ha-181247-m02 host status = "Running" (err=<nil>)
	I0917 17:33:42.689797   38631 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:33:42.690081   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.690142   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.705570   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0917 17:33:42.706078   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.706656   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.706683   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.707051   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.707251   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetIP
	I0917 17:33:42.710559   38631 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:33:42.711069   38631 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:28:55 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:33:42.711098   38631 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:33:42.711279   38631 host.go:66] Checking if "ha-181247-m02" exists ...
	I0917 17:33:42.711674   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.711712   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.730304   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0917 17:33:42.730763   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.731369   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.731392   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.731908   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.732124   38631 main.go:141] libmachine: (ha-181247-m02) Calling .DriverName
	I0917 17:33:42.732331   38631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:42.732357   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHHostname
	I0917 17:33:42.735814   38631 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:33:42.736293   38631 main.go:141] libmachine: (ha-181247-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:96", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:28:55 +0000 UTC Type:0 Mac:52:54:00:a4:df:96 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-181247-m02 Clientid:01:52:54:00:a4:df:96}
	I0917 17:33:42.736320   38631 main.go:141] libmachine: (ha-181247-m02) DBG | domain ha-181247-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:a4:df:96 in network mk-ha-181247
	I0917 17:33:42.736513   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHPort
	I0917 17:33:42.736702   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHKeyPath
	I0917 17:33:42.736854   38631 main.go:141] libmachine: (ha-181247-m02) Calling .GetSSHUsername
	I0917 17:33:42.737011   38631 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m02/id_rsa Username:docker}
	I0917 17:33:42.823110   38631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:33:42.843426   38631 kubeconfig.go:125] found "ha-181247" server: "https://192.168.39.254:8443"
	I0917 17:33:42.843455   38631 api_server.go:166] Checking apiserver status ...
	I0917 17:33:42.843504   38631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:33:42.868208   38631 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0917 17:33:42.879925   38631 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:33:42.880000   38631 ssh_runner.go:195] Run: ls
	I0917 17:33:42.885647   38631 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0917 17:33:42.890149   38631 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0917 17:33:42.890178   38631 status.go:422] ha-181247-m02 apiserver status = Running (err=<nil>)
	I0917 17:33:42.890189   38631 status.go:257] ha-181247-m02 status: &{Name:ha-181247-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:33:42.890208   38631 status.go:255] checking status of ha-181247-m04 ...
	I0917 17:33:42.890528   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.890567   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.907520   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0917 17:33:42.908051   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.908550   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.908570   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.908911   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.909084   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetState
	I0917 17:33:42.910671   38631 status.go:330] ha-181247-m04 host status = "Running" (err=<nil>)
	I0917 17:33:42.910686   38631 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:33:42.910980   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.911015   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.926606   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I0917 17:33:42.927026   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.927621   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.927649   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.928026   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.928289   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetIP
	I0917 17:33:42.931604   38631 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:33:42.932097   38631 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:31:07 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:33:42.932125   38631 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:33:42.932282   38631 host.go:66] Checking if "ha-181247-m04" exists ...
	I0917 17:33:42.932715   38631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:33:42.932756   38631 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:33:42.949403   38631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0917 17:33:42.949882   38631 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:33:42.950314   38631 main.go:141] libmachine: Using API Version  1
	I0917 17:33:42.950351   38631 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:33:42.950644   38631 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:33:42.950847   38631 main.go:141] libmachine: (ha-181247-m04) Calling .DriverName
	I0917 17:33:42.951034   38631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:42.951058   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHHostname
	I0917 17:33:42.954457   38631 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:33:42.954949   38631 main.go:141] libmachine: (ha-181247-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:0a:d0", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:31:07 +0000 UTC Type:0 Mac:52:54:00:e5:0a:d0 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-181247-m04 Clientid:01:52:54:00:e5:0a:d0}
	I0917 17:33:42.954986   38631 main.go:141] libmachine: (ha-181247-m04) DBG | domain ha-181247-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:e5:0a:d0 in network mk-ha-181247
	I0917 17:33:42.955142   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHPort
	I0917 17:33:42.955311   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHKeyPath
	I0917 17:33:42.955468   38631 main.go:141] libmachine: (ha-181247-m04) Calling .GetSSHUsername
	I0917 17:33:42.955590   38631 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247-m04/id_rsa Username:docker}
	W0917 17:34:01.329448   38631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.63:22: connect: no route to host
	W0917 17:34:01.329544   38631 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host
	E0917 17:34:01.329559   38631 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host
	I0917 17:34:01.329567   38631 status.go:257] ha-181247-m04 status: &{Name:ha-181247-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0917 17:34:01.329594   38631 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.63:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr" : exit status 3
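For context, the status check traced above fails at the SSH dial to ha-181247-m04 (192.168.39.63:22, "no route to host" after the cluster stop), while the apiserver health probe against the HA VIP (https://192.168.39.254:8443/healthz) still returned 200 for the control-plane nodes. The healthz portion of that check boils down to an HTTPS GET and a 200 test; the following is a minimal Go sketch of that probe, not minikube's actual implementation — the endpoint is the VIP taken from the log for illustration only, and TLS verification is skipped here purely for brevity (minikube verifies against the cluster CA from the kubeconfig).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// healthzOK issues GET <endpoint>/healthz and reports whether the
	// apiserver answered with HTTP 200, mirroring the "returned 200: ok"
	// lines in the status log above.
	func healthzOK(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip certificate verification instead of
				// loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		// 192.168.39.254:8443 is the HA VIP seen in the log; illustrative value.
		ok, err := healthzOK("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}

A probe like this only explains the "APIServer:Running" half of the result; the overall exit status 3 comes from the unreachable worker node, which is reported before any healthz request is attempted for it.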
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-181247 -n ha-181247
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-181247 logs -n 25: (1.859548002s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m04 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp testdata/cp-test.txt                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247:/home/docker/cp-test_ha-181247-m04_ha-181247.txt                       |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247 sudo cat                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247.txt                                 |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m02:/home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m02 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m03:/home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n                                                                 | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | ha-181247-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-181247 ssh -n ha-181247-m03 sudo cat                                          | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC | 17 Sep 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-181247 node stop m02 -v=7                                                     | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-181247 node start m02 -v=7                                                    | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-181247 -v=7                                                           | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-181247 -v=7                                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-181247 --wait=true -v=7                                                    | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:27 UTC | 17 Sep 24 17:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-181247                                                                | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:31 UTC |                     |
	| node    | ha-181247 node delete m03 -v=7                                                   | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:31 UTC | 17 Sep 24 17:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-181247 stop -v=7                                                              | ha-181247 | jenkins | v1.34.0 | 17 Sep 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:27:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:27:07.059381   36365 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:27:07.059669   36365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:27:07.059681   36365 out.go:358] Setting ErrFile to fd 2...
	I0917 17:27:07.059686   36365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:27:07.059923   36365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:27:07.060544   36365 out.go:352] Setting JSON to false
	I0917 17:27:07.061656   36365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4142,"bootTime":1726589885,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:27:07.061761   36365 start.go:139] virtualization: kvm guest
	I0917 17:27:07.064168   36365 out.go:177] * [ha-181247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:27:07.065879   36365 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:27:07.065890   36365 notify.go:220] Checking for updates...
	I0917 17:27:07.068433   36365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:27:07.070027   36365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:27:07.071316   36365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:27:07.072756   36365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:27:07.074232   36365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:27:07.076170   36365 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:27:07.076317   36365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:27:07.076843   36365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:27:07.076885   36365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:27:07.095271   36365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I0917 17:27:07.095691   36365 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:27:07.096280   36365 main.go:141] libmachine: Using API Version  1
	I0917 17:27:07.096312   36365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:27:07.096678   36365 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:27:07.096949   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.136571   36365 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 17:27:07.137942   36365 start.go:297] selected driver: kvm2
	I0917 17:27:07.137964   36365 start.go:901] validating driver "kvm2" against &{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:27:07.138114   36365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:27:07.138495   36365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:27:07.138599   36365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:27:07.154907   36365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:27:07.155585   36365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:27:07.155627   36365 cni.go:84] Creating CNI manager for ""
	I0917 17:27:07.155693   36365 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 17:27:07.155783   36365 start.go:340] cluster config:
	{Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:27:07.155991   36365 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:27:07.158391   36365 out.go:177] * Starting "ha-181247" primary control-plane node in "ha-181247" cluster
	I0917 17:27:07.159905   36365 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:27:07.159960   36365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:27:07.159974   36365 cache.go:56] Caching tarball of preloaded images
	I0917 17:27:07.160079   36365 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:27:07.160092   36365 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:27:07.160241   36365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/config.json ...
	I0917 17:27:07.160477   36365 start.go:360] acquireMachinesLock for ha-181247: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:27:07.160532   36365 start.go:364] duration metric: took 33.648µs to acquireMachinesLock for "ha-181247"
	I0917 17:27:07.160552   36365 start.go:96] Skipping create...Using existing machine configuration
	I0917 17:27:07.160560   36365 fix.go:54] fixHost starting: 
	I0917 17:27:07.160856   36365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:27:07.160896   36365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:27:07.176113   36365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0917 17:27:07.176651   36365 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:27:07.177156   36365 main.go:141] libmachine: Using API Version  1
	I0917 17:27:07.177178   36365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:27:07.177521   36365 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:27:07.177724   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.177883   36365 main.go:141] libmachine: (ha-181247) Calling .GetState
	I0917 17:27:07.179481   36365 fix.go:112] recreateIfNeeded on ha-181247: state=Running err=<nil>
	W0917 17:27:07.179498   36365 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 17:27:07.181707   36365 out.go:177] * Updating the running kvm2 "ha-181247" VM ...
	I0917 17:27:07.183167   36365 machine.go:93] provisionDockerMachine start ...
	I0917 17:27:07.183188   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:27:07.183440   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.186012   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.186507   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.186526   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.186805   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.187009   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.187171   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.187271   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.187398   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.187650   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.187663   36365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:27:07.302568   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:27:07.302602   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.302861   36365 buildroot.go:166] provisioning hostname "ha-181247"
	I0917 17:27:07.302890   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.303115   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.306335   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.306806   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.306836   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.307024   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.307210   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.307416   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.307551   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.307706   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.307974   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.307993   36365 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181247 && echo "ha-181247" | sudo tee /etc/hostname
	I0917 17:27:07.438076   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181247
	
	I0917 17:27:07.438100   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.441155   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.441659   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.441695   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.441850   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.442049   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.442205   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.442337   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.442501   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.442673   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.442687   36365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181247/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:27:07.555031   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:27:07.555071   36365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:27:07.555090   36365 buildroot.go:174] setting up certificates
	I0917 17:27:07.555099   36365 provision.go:84] configureAuth start
	I0917 17:27:07.555107   36365 main.go:141] libmachine: (ha-181247) Calling .GetMachineName
	I0917 17:27:07.555370   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:27:07.558099   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.558554   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.558573   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.558770   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.561424   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.561798   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.561829   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.562048   36365 provision.go:143] copyHostCerts
	I0917 17:27:07.562075   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:27:07.562111   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:27:07.562121   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:27:07.562193   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:27:07.562268   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:27:07.562285   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:27:07.562289   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:27:07.562325   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:27:07.562368   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:27:07.562382   36365 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:27:07.562390   36365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:27:07.562413   36365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:27:07.562470   36365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.ha-181247 san=[127.0.0.1 192.168.39.195 ha-181247 localhost minikube]
	I0917 17:27:07.646706   36365 provision.go:177] copyRemoteCerts
	I0917 17:27:07.646768   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:27:07.646792   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.649927   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.650353   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.650383   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.650674   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.650898   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.651133   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.651310   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:27:07.736834   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:27:07.736905   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:27:07.765943   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:27:07.766046   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 17:27:07.793860   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:27:07.793926   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 17:27:07.822875   36365 provision.go:87] duration metric: took 267.764697ms to configureAuth
	I0917 17:27:07.822913   36365 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:27:07.823205   36365 config.go:182] Loaded profile config "ha-181247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:27:07.823299   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:27:07.826114   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.826599   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:27:07.826630   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:27:07.826791   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:27:07.827005   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.827150   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:27:07.827303   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:27:07.827482   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:27:07.827650   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:27:07.827671   36365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:28:38.754051   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:28:38.754084   36365 machine.go:96] duration metric: took 1m31.570902352s to provisionDockerMachine
	I0917 17:28:38.754097   36365 start.go:293] postStartSetup for "ha-181247" (driver="kvm2")
	I0917 17:28:38.754111   36365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:28:38.754129   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:38.754474   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:28:38.754508   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.757777   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.758268   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.758295   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.758498   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.758702   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.759018   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.759210   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:38.845570   36365 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:28:38.850043   36365 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:28:38.850072   36365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:28:38.850137   36365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:28:38.850231   36365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:28:38.850242   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:28:38.850335   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:28:38.860193   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:28:38.887078   36365 start.go:296] duration metric: took 132.965629ms for postStartSetup
	I0917 17:28:38.887132   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:38.887445   36365 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0917 17:28:38.887471   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.890078   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.890516   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.890540   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.890766   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.890944   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.891106   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.891209   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	W0917 17:28:38.976129   36365 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0917 17:28:38.976162   36365 fix.go:56] duration metric: took 1m31.815604589s for fixHost
	I0917 17:28:38.976183   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:38.978870   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.979320   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:38.979365   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:38.979461   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:38.979664   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.979805   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:38.979945   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:38.980090   36365 main.go:141] libmachine: Using SSH client type: native
	I0917 17:28:38.980267   36365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0917 17:28:38.980279   36365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:28:39.094543   36365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726594119.035454615
	
	I0917 17:28:39.094566   36365 fix.go:216] guest clock: 1726594119.035454615
	I0917 17:28:39.094575   36365 fix.go:229] Guest: 2024-09-17 17:28:39.035454615 +0000 UTC Remote: 2024-09-17 17:28:38.976169426 +0000 UTC m=+91.954213076 (delta=59.285189ms)
	I0917 17:28:39.094599   36365 fix.go:200] guest clock delta is within tolerance: 59.285189ms
	I0917 17:28:39.094605   36365 start.go:83] releasing machines lock for "ha-181247", held for 1m31.934061095s
	I0917 17:28:39.094632   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.094904   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:28:39.097681   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.098033   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.098065   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.098208   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.098937   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.099140   36365 main.go:141] libmachine: (ha-181247) Calling .DriverName
	I0917 17:28:39.099264   36365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:28:39.099313   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:39.099361   36365 ssh_runner.go:195] Run: cat /version.json
	I0917 17:28:39.099382   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHHostname
	I0917 17:28:39.101895   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.101921   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102309   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.102344   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102373   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:39.102419   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:39.102483   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:39.102700   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:39.102710   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHPort
	I0917 17:28:39.102904   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:39.102906   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHKeyPath
	I0917 17:28:39.103072   36365 main.go:141] libmachine: (ha-181247) Calling .GetSSHUsername
	I0917 17:28:39.103097   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:39.103211   36365 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/ha-181247/id_rsa Username:docker}
	I0917 17:28:39.194762   36365 ssh_runner.go:195] Run: systemctl --version
	I0917 17:28:39.215412   36365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:28:39.392490   36365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 17:28:39.401799   36365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:28:39.401871   36365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:28:39.412258   36365 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
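[Editor's note] The `find ... -exec mv {} {}.mk_disabled` run above side-lines any bridge or podman CNI configs in /etc/cni/net.d so they cannot conflict with the CNI minikube is about to install; here nothing matched, so nothing was disabled. A rough Go equivalent of that rename pass (directory and suffix come from the log; everything else is illustrative):

// disablecni.go - illustrative sketch of the bridge/podman CNI disable step.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", skipping files that are already disabled.
func disableBridgeCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("disabled %d configs: %v\n", len(moved), moved)
}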
	I0917 17:28:39.412283   36365 start.go:495] detecting cgroup driver to use...
	I0917 17:28:39.412336   36365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:28:39.433268   36365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:28:39.449082   36365 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:28:39.449142   36365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:28:39.464610   36365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:28:39.479714   36365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:28:39.645679   36365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:28:39.809867   36365 docker.go:233] disabling docker service ...
	I0917 17:28:39.809948   36365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:28:39.831643   36365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:28:39.848306   36365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:28:40.021274   36365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:28:40.171988   36365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:28:40.188607   36365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:28:40.209086   36365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:28:40.209154   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.221652   36365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:28:40.221736   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.233447   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.245864   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.257876   36365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:28:40.269788   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.281777   36365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.294126   36365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:28:40.305395   36365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:28:40.315744   36365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:28:40.326347   36365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:28:40.472679   36365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:28:42.893452   36365 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.420738973s)
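[Editor's note] The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf (pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-pin conmon_cgroup to "pod") before restarting CRI-O. A sketch of the same edits applied in-process; the sample input config and function name are illustrative, not minikube's code.

// crioconf.go - illustrative sketch of the 02-crio.conf rewrite shown above.
package main

import (
	"fmt"
	"strings"
)

// rewriteCrioConf mirrors the sed pipeline: replace the pause_image line,
// force cgroup_manager to cgroupfs, and pin conmon_cgroup to "pod".
func rewriteCrioConf(conf, pauseImage string) string {
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		switch {
		case strings.Contains(line, "pause_image = "):
			out = append(out, fmt.Sprintf("pause_image = %q", pauseImage))
		case strings.Contains(line, "cgroup_manager = "):
			out = append(out, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`)
		case strings.Contains(line, "conmon_cgroup = "):
			// dropped here; re-added right after cgroup_manager above
		default:
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10"))
}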
	I0917 17:28:42.893486   36365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:28:42.893538   36365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:28:42.901847   36365 start.go:563] Will wait 60s for crictl version
	I0917 17:28:42.901905   36365 ssh_runner.go:195] Run: which crictl
	I0917 17:28:42.905812   36365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:28:42.944093   36365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:28:42.944199   36365 ssh_runner.go:195] Run: crio --version
	I0917 17:28:42.979559   36365 ssh_runner.go:195] Run: crio --version
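[Editor's note] start.go waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to answer before declaring the runtime ready. A minimal retry loop in the same spirit; the poll interval and error handling are assumptions, not the actual implementation.

// waitsock.go - illustrative sketch of waiting for the CRI-O socket.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}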
	I0917 17:28:43.013266   36365 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:28:43.015132   36365 main.go:141] libmachine: (ha-181247) Calling .GetIP
	I0917 17:28:43.018395   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:43.018773   36365 main.go:141] libmachine: (ha-181247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:1e:14", ip: ""} in network mk-ha-181247: {Iface:virbr1 ExpiryTime:2024-09-17 18:17:27 +0000 UTC Type:0 Mac:52:54:00:51:1e:14 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-181247 Clientid:01:52:54:00:51:1e:14}
	I0917 17:28:43.018804   36365 main.go:141] libmachine: (ha-181247) DBG | domain ha-181247 has defined IP address 192.168.39.195 and MAC address 52:54:00:51:1e:14 in network mk-ha-181247
	I0917 17:28:43.018997   36365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:28:43.025026   36365 kubeadm.go:883] updating cluster {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:28:43.025379   36365 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:28:43.025453   36365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:28:43.075504   36365 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:28:43.075529   36365 crio.go:433] Images already preloaded, skipping extraction
	I0917 17:28:43.075585   36365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:28:43.113122   36365 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:28:43.113148   36365 cache_images.go:84] Images are preloaded, skipping loading
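[Editor's note] The two `crictl images --output json` runs above confirm the v1.31.1 preload tarball already supplied every required image, so nothing is extracted or pulled. A sketch of how such a presence check could be done against the crictl JSON output; the struct fields follow the CRI list-images JSON shape and the expected image tags are examples, so treat the exact details as assumptions.

// preloadcheck.go - illustrative check that required images are already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList captures the parts of `crictl images -o json` this sketch needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Two images a v1.31.1 preload would be expected to contain.
	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/pause:3.10"} {
		fmt.Printf("%-45s present=%v\n", want, have[want])
	}
}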
	I0917 17:28:43.113160   36365 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.1 crio true true} ...
	I0917 17:28:43.113285   36365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-181247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:28:43.113365   36365 ssh_runner.go:195] Run: crio config
	I0917 17:28:43.167249   36365 cni.go:84] Creating CNI manager for ""
	I0917 17:28:43.167278   36365 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 17:28:43.167288   36365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:28:43.167315   36365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181247 NodeName:ha-181247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:28:43.167486   36365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181247"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:28:43.167510   36365 kube-vip.go:115] generating kube-vip config ...
	I0917 17:28:43.167561   36365 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0917 17:28:43.179878   36365 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0917 17:28:43.180003   36365 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
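[Editor's note] kube-vip.go renders the static-pod manifest printed above, substituting the HA virtual IP (192.168.39.254), the API server port (8443), and the lb_enable flag that was auto-enabled at kube-vip.go:167. A small text/template sketch of filling in such an env fragment; the template text below is illustrative and is not minikube's actual template.

// kubevip.go - illustrative sketch of rendering a kube-vip env fragment.
package main

import (
	"os"
	"text/template"
)

// options mirrors the values visible in the generated manifest above.
type options struct {
	VIP      string
	Port     int
	LBEnable bool
}

const fragment = `    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "{{ .LBEnable }}"
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(fragment))
	opts := options{VIP: "192.168.39.254", Port: 8443, LBEnable: true}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}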
	I0917 17:28:43.180088   36365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:28:43.190295   36365 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:28:43.190385   36365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 17:28:43.200596   36365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0917 17:28:43.219373   36365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:28:43.237790   36365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0917 17:28:43.257661   36365 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0917 17:28:43.276695   36365 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0917 17:28:43.281865   36365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:28:43.429986   36365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:28:43.445803   36365 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247 for IP: 192.168.39.195
	I0917 17:28:43.445827   36365 certs.go:194] generating shared ca certs ...
	I0917 17:28:43.445843   36365 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.446017   36365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:28:43.446072   36365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:28:43.446087   36365 certs.go:256] generating profile certs ...
	I0917 17:28:43.446184   36365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/client.key
	I0917 17:28:43.446219   36365 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d
	I0917 17:28:43.446236   36365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195 192.168.39.11 192.168.39.122 192.168.39.254]
	I0917 17:28:43.570763   36365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d ...
	I0917 17:28:43.570800   36365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d: {Name:mk034a01997b55799b7e68b7917c6787739766d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.570981   36365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d ...
	I0917 17:28:43.570993   36365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d: {Name:mk56769bdec63cb34da7404ed80355a546378f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:28:43.571066   36365 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt.bcc9b76d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt
	I0917 17:28:43.571233   36365 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key.bcc9b76d -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key
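[Editor's note] crypto.go:68 above generates the apiserver serving certificate with every control-plane address, including the 192.168.39.254 VIP, as IP SANs. Below is a compact, self-signed stand-in using crypto/x509 that puts the same IPs into IPAddresses; in the real flow the cert is signed by the minikube CA rather than self-signed, so this is a sketch, not the actual code path.

// apiservercert.go - illustrative, self-signed stand-in for the SAN cert step.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// IP SANs copied from the log line above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.195"), net.ParseIP("192.168.39.11"),
		net.ParseIP("192.168.39.122"), net.ParseIP("192.168.39.254"),
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}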
	I0917 17:28:43.571375   36365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key
	I0917 17:28:43.571390   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:28:43.571404   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:28:43.571417   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:28:43.571429   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:28:43.571444   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:28:43.571457   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:28:43.571470   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:28:43.571482   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:28:43.571533   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:28:43.571560   36365 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:28:43.571569   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:28:43.571589   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:28:43.571610   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:28:43.571632   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:28:43.571672   36365 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:28:43.571700   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.571714   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.571726   36365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.572347   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:28:43.600489   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:28:43.627271   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:28:43.653537   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:28:43.680440   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 17:28:43.706154   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 17:28:43.732565   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:28:43.758769   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/ha-181247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:28:43.784917   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:28:43.811978   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:28:43.837533   36365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:28:43.863590   36365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:28:43.881556   36365 ssh_runner.go:195] Run: openssl version
	I0917 17:28:43.887893   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:28:43.901761   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.906902   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.906964   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:28:43.913335   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:28:43.923587   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:28:43.935186   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.939831   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.939880   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:28:43.945844   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:28:43.956175   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:28:43.968811   36365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.973665   36365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.973728   36365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:28:43.979691   36365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
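[Editor's note] Each CA bundle copied above is linked into /etc/ssl/certs under its OpenSSL subject hash (the `openssl x509 -hash -noout` calls, e.g. 3ec20f2e.0 for 182592.pem) so the system trust lookup can find it. A sketch of that hash-then-symlink step, shelling out to openssl the same way the log does; the paths come from the log, the rest is illustrative.

// cahash.go - illustrative sketch of the subject-hash symlink step.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// links the cert as <hash>.0 inside /etc/ssl/certs if the link is missing.
func linkBySubjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/182592.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked as", link)
}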
	I0917 17:28:43.990007   36365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:28:43.995067   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 17:28:44.001113   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 17:28:44.007052   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 17:28:44.013269   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 17:28:44.019312   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 17:28:44.030381   36365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
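[Editor's note] The `-checkend 86400` calls above verify each existing control-plane certificate is still valid for at least another 24 hours; certificates that fail the check would be regenerated. An equivalent check in pure Go; the file path is one example taken from the log.

// checkend.go - illustrative Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least the given duration from now.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}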
	I0917 17:28:44.036653   36365 kubeadm.go:392] StartCluster: {Name:ha-181247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-181247 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.63 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:28:44.036781   36365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:28:44.036839   36365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:28:44.081936   36365 cri.go:89] found id: "c0b6697ed71d9634546240444406aecec2623303f0c13d18dc5c2f4e4fe9559d"
	I0917 17:28:44.081965   36365 cri.go:89] found id: "5800f16007ffd726fc1ae2824192d18e0680d6772934633730106e07505d6321"
	I0917 17:28:44.081971   36365 cri.go:89] found id: "16a324568e3b36f8c61b1b0ff2dadbfd908eae0771d6c335b9ae9b62cf27023e"
	I0917 17:28:44.081976   36365 cri.go:89] found id: "f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5"
	I0917 17:28:44.081980   36365 cri.go:89] found id: "595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242"
	I0917 17:28:44.081985   36365 cri.go:89] found id: "4c6e5f75c94800e99ebcedfae5efd792c106959333def0368e33a21ce4b57dba"
	I0917 17:28:44.081989   36365 cri.go:89] found id: "aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d"
	I0917 17:28:44.081993   36365 cri.go:89] found id: "8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2"
	I0917 17:28:44.081997   36365 cri.go:89] found id: "fe133e1d0be653fbf2459fedd510646aaea8936b333247d26b813696efb08ff5"
	I0917 17:28:44.082004   36365 cri.go:89] found id: "e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91"
	I0917 17:28:44.082008   36365 cri.go:89] found id: "1bd357b39ecdb7b929a836991acf48843872ad86112b5035be1a2d9f29d4256a"
	I0917 17:28:44.082012   36365 cri.go:89] found id: "2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4"
	I0917 17:28:44.082016   36365 cri.go:89] found id: "c48764653b9795d2cba4178792c492a672b05306c9e7af677049f5a787ecc32d"
	I0917 17:28:44.082020   36365 cri.go:89] found id: ""
	I0917 17:28:44.082061   36365 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 17 17:34:01 ha-181247 crio[3559]: time="2024-09-17 17:34:01.978655480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594441978630845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4869cdc0-329f-4532-bbec-8c8b129fbc00 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:01 ha-181247 crio[3559]: time="2024-09-17 17:34:01.979317266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cdef0e4-b7a4-4d39-9ed3-20c1fb48f874 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:01 ha-181247 crio[3559]: time="2024-09-17 17:34:01.979379118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cdef0e4-b7a4-4d39-9ed3-20c1fb48f874 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:01 ha-181247 crio[3559]: time="2024-09-17 17:34:01.979795827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cdef0e4-b7a4-4d39-9ed3-20c1fb48f874 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.029632440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=223de262-330f-454f-bf3f-4c644e755e1c name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.029712307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=223de262-330f-454f-bf3f-4c644e755e1c name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.030872918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0f65594-3561-4463-9a7c-5e523272ae70 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.031386348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594442031355818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0f65594-3561-4463-9a7c-5e523272ae70 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.032277675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a3926b0-097c-465d-b170-dac0b7786009 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.032346721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a3926b0-097c-465d-b170-dac0b7786009 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.032769881Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a3926b0-097c-465d-b170-dac0b7786009 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.082147475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d7c5eb5-61c3-49b3-8da0-9dab33ea40f4 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.082227366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d7c5eb5-61c3-49b3-8da0-9dab33ea40f4 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.083688563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ae73b24-a3dd-4902-813e-027e943a1037 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.084483716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594442084452532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ae73b24-a3dd-4902-813e-027e943a1037 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.085322604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71f5f6bc-dbae-448b-8998-97d47c14bfd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.085385399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71f5f6bc-dbae-448b-8998-97d47c14bfd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.085820984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71f5f6bc-dbae-448b-8998-97d47c14bfd5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.139903175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30b91355-4601-451c-bd6c-e0177793c393 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.139987102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30b91355-4601-451c-bd6c-e0177793c393 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.142005305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0b52835-93b4-42a4-88b6-ca03a97b57ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.142490807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594442142465190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0b52835-93b4-42a4-88b6-ca03a97b57ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.143280215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88dbc449-f2cc-4c8b-94fa-f5a69f16ba68 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.143337672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88dbc449-f2cc-4c8b-94fa-f5a69f16ba68 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:34:02 ha-181247 crio[3559]: time="2024-09-17 17:34:02.143763517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:66e3c05e0a322c056ce6663fb5927823b5203e792db7564e24855819dd79380a,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726594205451548352,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726594168450810396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d783c80592a10abe999e14e4f3929b183e45e6ec1aa38a049557fb506a780c,PodSandboxId:69fceaec5a1d1fd6679cf3637d991c5f2b901f5b5c497fc3dc6ec82cbc2e7611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726594163774155936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726594163326854622,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ad4df81626cef60b73562b5608dfcc703fb4767e6831cde09375b17feb5d5c3,PodSandboxId:8a99278a4aabaeb673baebb3299b207a1cc469df6183497938cc4da48d0d989d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726594162442540876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcef4cf0-61a6-4f9f-9644-f17f7f819237,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c401bf90a7a1a531d540c9091deb2112ea28bbeb9bacbedfe1f9edb84c5fab4,PodSandboxId:0122ff4dbdcc14cd2c57d3a9233ac49d8022f39a25a02c39ea32203931059bd3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726594144803901483,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed6fcaf0b2bec2d4bdeb50696d03324,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3,PodSandboxId:ef2f4d54b977f375b920271720748727e7bf3c7f7923b2f3899e7e9c75a2861b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726594132127679444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a,PodSandboxId:b6c98a7fec7f5771c5418884361504798d252e0dec9fcf3f98c5d85c63a2adb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130810348311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f,PodSandboxId:392dae9735c73c55d4a36328420d772cb29c9dbfc311db5b79628fd69cd38590,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726594130553543051,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35,PodSandboxId:c8290f8fe843646b039a9cc7442e523f6d635bcf96f64c299984f542aff9c22b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726594130717475289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e,PodSandboxId:37fc3cb1074825a7032531fd5d76c4c2de499e673326f174b628d6cc266127a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726594130503031471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633,PodSandboxId:331d57b91b4af9f8d6d1c82122eb151fd002653837435bbc31f43568066011af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726594130358560852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3afa944276158c101e1b388244401851,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d,PodSandboxId:7b562ed00e84eab3969ce84ff04b5df5b662d05fa11b30b55da4190ac62aa9d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726594130284032408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12,PodSandboxId:5403d6c44b701aeb861aeca376475c318e2332bf44bdb2eb8778f6244592ef72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726594130234849591,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad9722f4b7cb935efee60829f463e82,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e590e905eaba85f23f252703d2acaae800370c02af50dfc79d700256e17f2f,PodSandboxId:032ab62b0ab6812b12acf668d671cc659557b22a8790aa1fc2cfa4494ec55dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726593627164505554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w8wxj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 681ace64-6c78-437e-9e9d-46edd2b4a8c4,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5,PodSandboxId:4564f117340894dd0f7a94fcff0ee0f5325c4046162909def7023cd62482149e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490994501976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ae9d00-44ce-47be-80c5-12144ff8c69b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242,PodSandboxId:251b80e9b641b04b92ef1676da10ff73b6530adcb89c413d79c9dc235a3e2707,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726593490936203600,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5lmg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1052e249-3530-4220-8214-0c36a02c4215,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d,PodSandboxId:2c5d3e765b253f81a3020f932abb56db9dbde7085ba378402b5ad0ce2468bd4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726593478702515823,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rrxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a075630a-48df-429f-98ef-49bca2d9dac5,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2,PodSandboxId:030199bb820c578808c1be3b2896b11a2b2fd10c184eb736f1a637e365377055,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726593478343369849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tkbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 882de4ca-d789-403e-a22e-22fbc776af10,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91,PodSandboxId:6c0fc2dc035f99358aade2656ae8ecd22f042954413615bd3314f4560fe0f22b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726593466010452800,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5748534d2a3a40ee72c6688e8f4f184d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4,PodSandboxId:64083dac55fed338c25d3bc30e0f7fabdd26345b8ad9d8098355f4740009de72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726593465866672350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-181247,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e891f6ff7af12b13f4bafa92c7341b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88dbc449-f2cc-4c8b-94fa-f5a69f16ba68 name=/runtime.v1.RuntimeService/ListContainers
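(The dump above is the CRI-O runtime's reply to /runtime.v1.RuntimeService/ListContainers, as captured by the otel-collector interceptor.) A roughly equivalent listing can be pulled straight from the CRI-O socket on the node. The commands below are an illustrative sketch, not part of the test run; they assume the ha-181247 VM is still reachable via minikube ssh and that crictl ships in the guest image:

  minikube ssh -p ha-181247 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
  minikube ssh -p ha-181247 -- sudo crictl inspect 902398e71d1c4    # full state for one container ID taken from the dump above

crictl ps -a includes exited containers, which is why the earlier kube-apiserver and kube-controller-manager attempts show up alongside their running replacements in the status table below.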
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66e3c05e0a322       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   8a99278a4aaba       storage-provisioner
	902398e71d1c4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   5403d6c44b701       kube-controller-manager-ha-181247
	f5d783c80592a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   69fceaec5a1d1       busybox-7dff88458-w8wxj
	7d3de5c9eb5bc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   331d57b91b4af       kube-apiserver-ha-181247
	9ad4df81626ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   8a99278a4aaba       storage-provisioner
	3c401bf90a7a1       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   0122ff4dbdcc1       kube-vip-ha-181247
	e006a9b2aae67       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   ef2f4d54b977f       kube-proxy-7rrxk
	79aeef1c48943       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   b6c98a7fec7f5       coredns-7c65d6cfc9-5lmg4
	53ab2cc61c85b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   c8290f8fe8436       coredns-7c65d6cfc9-bdthh
	e3b0015ac8d23       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   392dae9735c73       kindnet-2tkbp
	ea3f2b560fe5b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   37fc3cb107482       kube-scheduler-ha-181247
	4062b287e3813       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   331d57b91b4af       kube-apiserver-ha-181247
	f3ee2daa7460a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   7b562ed00e84e       etcd-ha-181247
	37b475bb0d2d9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   5403d6c44b701       kube-controller-manager-ha-181247
	c1e590e905eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   032ab62b0ab68       busybox-7dff88458-w8wxj
	f192df08c3590       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   4564f11734089       coredns-7c65d6cfc9-bdthh
	595bdaca307f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   251b80e9b641b       coredns-7c65d6cfc9-5lmg4
	aa3e79172e867       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   2c5d3e765b253       kube-proxy-7rrxk
	8d41e13428885       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   030199bb820c5       kindnet-2tkbp
	e131e7c4af3fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   6c0fc2dc035f9       etcd-ha-181247
	2b77bc3ea3167       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   64083dac55fed       kube-scheduler-ha-181247
	
	
	==> coredns [53ab2cc61c85bcd91d55627ca1c495f8df1e2af7877e16370d9bb58038c37d35] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1955337142]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:28:57.426) (total time: 10001ms):
	Trace[1955337142]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:29:07.427)
	Trace[1955337142]: [10.001728716s] [10.001728716s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56302->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:56302->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [595bdaca307f1441f309d1bcf4b34051887f8d890e48b076244fe646ebd3d242] <==
	[INFO] 10.244.0.4:58082 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00083865s
	[INFO] 10.244.1.2:33599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003427065s
	[INFO] 10.244.1.2:48415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218431s
	[INFO] 10.244.1.2:36800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118274s
	[INFO] 10.244.2.2:43997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248398s
	[INFO] 10.244.2.2:35973 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001811s
	[INFO] 10.244.2.2:49572 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172284s
	[INFO] 10.244.0.4:47826 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002065843s
	[INFO] 10.244.0.4:36193 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199582s
	[INFO] 10.244.0.4:50628 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110526s
	[INFO] 10.244.0.4:44724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114759s
	[INFO] 10.244.0.4:42511 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083739s
	[INFO] 10.244.2.2:46937 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116808s
	[INFO] 10.244.2.2:44451 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173075s
	[INFO] 10.244.0.4:40459 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064325s
	[INFO] 10.244.1.2:49457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184596s
	[INFO] 10.244.1.2:38498 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205346s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130934s
	[INFO] 10.244.2.2:41589 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130541s
	[INFO] 10.244.0.4:45130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138569s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [79aeef1c489439f99016dfe63b6c19ec3dafc13f649ee64cf83855ab0598a90a] <==
	Trace[1686718204]: [10.001280091s] [10.001280091s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[490376353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:28:52.786) (total time: 10002ms):
	Trace[490376353]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:29:02.788)
	Trace[490376353]: [10.002109707s] [10.002109707s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[416424229]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:29:05.715) (total time: 10050ms):
	Trace[416424229]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer 10050ms (17:29:15.765)
	Trace[416424229]: [10.050516494s] [10.050516494s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47226->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f192df08c3590e7bfd79aae969f3dd8f66f4d35701d32e3420611215f63553e5] <==
	[INFO] 10.244.2.2:42284 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132466s
	[INFO] 10.244.0.4:37678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213338s
	[INFO] 10.244.0.4:44751 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122274s
	[INFO] 10.244.0.4:56988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001602111s
	[INFO] 10.244.1.2:42868 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00026031s
	[INFO] 10.244.1.2:40978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206846s
	[INFO] 10.244.1.2:41313 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097139s
	[INFO] 10.244.1.2:50208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151609s
	[INFO] 10.244.2.2:49264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143158s
	[INFO] 10.244.2.2:54921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162093s
	[INFO] 10.244.0.4:54768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211558s
	[INFO] 10.244.0.4:47021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048005s
	[INFO] 10.244.0.4:52698 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004567s
	[INFO] 10.244.1.2:39357 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237795s
	[INFO] 10.244.1.2:48172 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000183611s
	[INFO] 10.244.2.2:56434 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125357s
	[INFO] 10.244.2.2:37159 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179695s
	[INFO] 10.244.0.4:40381 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150761s
	[INFO] 10.244.0.4:39726 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074302s
	[INFO] 10.244.0.4:39990 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1853&timeout=8m42s&timeoutSeconds=522&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
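Every failure in the coredns logs above is an attempt to reach https://10.96.0.1:443, the ClusterIP of the default/kubernetes Service that fronts the API server; the attempts fail with TLS handshake timeouts, "no route to host", or "connection refused" while the control plane restarts, and the trailing "plugin/ready: Still waiting on: kubernetes" lines show the readiness plugin still blocked at capture time. One way to confirm the Service is again backed by healthy API server endpoints is sketched below; these commands are illustrative only and assume the ha-181247 kubeconfig context created by this test run:

  kubectl --context ha-181247 -n default get svc kubernetes -o wide
  kubectl --context ha-181247 -n default get endpoints kubernetes
  kubectl --context ha-181247 -n kube-system get pods -l k8s-app=kube-dns -o wide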
	
	
	==> describe nodes <==
	Name:               ha-181247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_17_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:17:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:33:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:29:29 +0000   Tue, 17 Sep 2024 17:18:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    ha-181247
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fef45c02f40245a0a3ede964289ca350
	  System UUID:                fef45c02-f402-45a0-a3ed-e964289ca350
	  Boot ID:                    3253b46a-acef-407f-8fd6-3d5cae46a6bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w8wxj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-5lmg4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-bdthh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-181247                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-2tkbp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-181247             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-181247    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-7rrxk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-181247             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-181247                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m27s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-181247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-181247 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-181247 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-181247 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Warning  ContainerGCFailed        6m10s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m31s (x3 over 6m20s)  kubelet          Node ha-181247 status is now: NodeNotReady
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-181247 event: Registered Node ha-181247 in Controller
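The ContainerGCFailed event above ("dial unix /var/run/crio/crio.sock: connect: no such file or directory") marks the window in which the kubelet on ha-181247 could not reach CRI-O during the restart. If that condition persisted, one quick check, assuming SSH access to the VM through the minikube profile, would be:

  minikube ssh -p ha-181247 -- sudo systemctl is-active crio
  minikube ssh -p ha-181247 -- sudo crictl info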
	
	
	Name:               ha-181247-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_18_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:33:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:30:15 +0000   Tue, 17 Sep 2024 17:29:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    ha-181247-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2585a68084874db38baf46d679282ed1
	  System UUID:                2585a680-8487-4db3-8baf-46d679282ed1
	  Boot ID:                    4746ebdc-02a2-4372-a8b4-1d642059f3bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-96b8c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-181247-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-qqpgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-181247-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-181247-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xmfcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-181247-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-181247-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-181247-m02 status is now: NodeNotReady
	  Normal  Starting                 4m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-181247-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-181247-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-181247-m02 event: Registered Node ha-181247-m02 in Controller
	
	
	Name:               ha-181247-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-181247-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=ha-181247
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_21_01_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:21:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181247-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:31:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:32:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:32:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:32:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:31:15 +0000   Tue, 17 Sep 2024 17:32:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-181247-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b33a6f0f712480eacc4183b870e9eb2
	  System UUID:                6b33a6f0-f712-480e-acc4-183b870e9eb2
	  Boot ID:                    91bbae3b-b227-490a-aaac-245b32a23838
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p5w7r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-ntzg5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-shlht           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-181247-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   NodeNotReady             3m52s                  node-controller  Node ha-181247-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-181247-m04 event: Registered Node ha-181247-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-181247-m04 has been rebooted, boot id: 91bbae3b-b227-490a-aaac-245b32a23838
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-181247-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-181247-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-181247-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-181247-m04 status is now: NodeNotReady
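	
	Note: ha-181247-m04 carries the node.kubernetes.io/unreachable taints and all of its conditions have flipped to Unknown, which matches the NodeNotReady events above. A node summary like this can generally be reproduced, and the taints inspected, with commands along the following lines (the kubectl context name ha-181247 is an assumption based on the minikube profile name):
	
	  kubectl --context ha-181247 describe node ha-181247-m04
	  kubectl --context ha-181247 get node ha-181247-m04 -o jsonpath='{.spec.taints}'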
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.348569] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.067341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053278] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.203985] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.135345] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.305138] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +4.213157] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +4.747183] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.062031] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.385150] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.092383] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.405737] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 17:18] kauditd_printk_skb: 41 callbacks suppressed
	[ +43.240359] kauditd_printk_skb: 26 callbacks suppressed
	[Sep17 17:28] systemd-fstab-generator[3484]: Ignoring "noauto" option for root device
	[  +0.170335] systemd-fstab-generator[3496]: Ignoring "noauto" option for root device
	[  +0.206124] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[  +0.160017] systemd-fstab-generator[3522]: Ignoring "noauto" option for root device
	[  +0.305640] systemd-fstab-generator[3550]: Ignoring "noauto" option for root device
	[  +2.953177] systemd-fstab-generator[3645]: Ignoring "noauto" option for root device
	[  +6.570640] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.419348] kauditd_printk_skb: 85 callbacks suppressed
	[Sep17 17:29] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [e131e7c4af3fcc40108c8700b714abff6a932db62448fb92d9eced1c2ada9a91] <==
	{"level":"info","ts":"2024-09-17T17:27:07.965428Z","caller":"traceutil/trace.go:171","msg":"trace[481436337] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"926.393034ms","start":"2024-09-17T17:27:07.039031Z","end":"2024-09-17T17:27:07.965424Z","steps":["trace[481436337] 'agreement among raft nodes before linearized reading'  (duration: 886.38891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:27:07.965441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:27:07.039013Z","time spent":"926.423288ms","remote":"127.0.0.1:36716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	2024/09/17 17:27:07 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-17T17:27:08.011128Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6657043732157711009,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-17T17:27:08.036952Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:27:08.037009Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.195:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:27:08.037300Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"324857e3fe6e5c62","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-17T17:27:08.037454Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037491Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037537Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037639Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037688Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037774Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e73967f545c05a22"}
	{"level":"info","ts":"2024-09-17T17:27:08.037781Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037790Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037928Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.037976Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.038007Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.038018Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:27:08.041025Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-09-17T17:27:08.041219Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-09-17T17:27:08.041250Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-181247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"]}
	{"level":"warn","ts":"2024-09-17T17:27:08.041371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.03220608s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [f3ee2daa7460a214dbdb41e54393bd05a62faf7c1a45c86af80c54d03c10fc3d] <==
	{"level":"info","ts":"2024-09-17T17:30:36.202414Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:30:36.216913Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:36.217017Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e75aed46b631937d","rtt":"0s","error":"dial tcp 192.168.39.122:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-17T17:30:37.278305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.266368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-181247-m03\" ","response":"range_response_count:1 size:6074"}
	{"level":"info","ts":"2024-09-17T17:30:37.278377Z","caller":"traceutil/trace.go:171","msg":"trace[2110576087] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-181247-m03; range_end:; response_count:1; response_revision:2384; }","duration":"103.397645ms","start":"2024-09-17T17:30:37.174967Z","end":"2024-09-17T17:30:37.278365Z","steps":["trace[2110576087] 'range keys from in-memory index tree'  (duration: 102.119639ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:30:44.722808Z","caller":"traceutil/trace.go:171","msg":"trace[774956040] transaction","detail":"{read_only:false; response_revision:2428; number_of_response:1; }","duration":"115.131679ms","start":"2024-09-17T17:30:44.607658Z","end":"2024-09-17T17:30:44.722789Z","steps":["trace[774956040] 'process raft request'  (duration: 114.172784ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:31:28.857842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 switched to configuration voters=(3623242536957402210 16661462599568742946)"}
	{"level":"info","ts":"2024-09-17T17:31:28.861694Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","removed-remote-peer-id":"e75aed46b631937d","removed-remote-peer-urls":["https://192.168.39.122:2380"]}
	{"level":"info","ts":"2024-09-17T17:31:28.861894Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.862091Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"324857e3fe6e5c62","removed-member-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.862182Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-17T17:31:28.862405Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:31:28.862466Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.862900Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:31:28.863024Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:31:28.864167Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.864424Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d","error":"context canceled"}
	{"level":"warn","ts":"2024-09-17T17:31:28.864556Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e75aed46b631937d","error":"failed to read e75aed46b631937d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-17T17:31:28.864641Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.864865Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d","error":"context canceled"}
	{"level":"info","ts":"2024-09-17T17:31:28.864930Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"324857e3fe6e5c62","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:31:28.864977Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e75aed46b631937d"}
	{"level":"info","ts":"2024-09-17T17:31:28.865015Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"324857e3fe6e5c62","removed-remote-peer-id":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.886142Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"324857e3fe6e5c62","remote-peer-id-stream-handler":"324857e3fe6e5c62","remote-peer-id-from":"e75aed46b631937d"}
	{"level":"warn","ts":"2024-09-17T17:31:28.886846Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"324857e3fe6e5c62","remote-peer-id-stream-handler":"324857e3fe6e5c62","remote-peer-id-from":"e75aed46b631937d"}
	
	
	==> kernel <==
	 17:34:02 up 16 min,  0 users,  load average: 0.36, 0.42, 0.36
	Linux ha-181247 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d41e134288854fec45bd7f2346f9652bdd3a1e807f2fc2cbcd942906f8f16d2] <==
	I0917 17:26:39.804251       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:39.804304       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:39.804468       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:39.804495       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:39.804550       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:39.804573       1 main.go:299] handling current node
	I0917 17:26:39.804585       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:39.804590       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:49.804697       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:49.804810       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:49.804952       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:49.805134       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	I0917 17:26:49.805235       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:49.805256       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:49.805360       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:49.805382       1 main.go:299] handling current node
	I0917 17:26:59.801480       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:26:59.801620       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:26:59.801776       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:26:59.801803       1 main.go:299] handling current node
	I0917 17:26:59.801834       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:26:59.801860       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:26:59.801934       1 main.go:295] Handling node with IPs: map[192.168.39.122:{}]
	I0917 17:26:59.801964       1 main.go:322] Node ha-181247-m03 has CIDR [10.244.2.0/24] 
	E0917 17:27:00.597790       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1856&timeout=5m32s&timeoutSeconds=332&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
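	
	kindnet watches Node objects and programs a route for each node's PodCIDR; the final error above is that watch failing once the service VIP 10.96.0.1 became unreachable. The per-node CIDRs it reports can be cross-checked directly, for example:
	
	  kubectl --context ha-181247 get nodes -o custom-columns=NAME:.metadata.name,INTERNAL-IP:.status.addresses[0].address,PODCIDR:.spec.podCIDR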
	
	
	==> kindnet [e3b0015ac8d2310e74efe8f4ade225b3ed047e43ccda61682d1ee8796c753d0f] <==
	I0917 17:33:22.024965       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:33:32.021282       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:33:32.021353       1 main.go:299] handling current node
	I0917 17:33:32.021378       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:33:32.021383       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:33:32.021558       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:33:32.021584       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:33:42.015957       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:33:42.016030       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:33:42.016947       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:33:42.017020       1 main.go:299] handling current node
	I0917 17:33:42.017099       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:33:42.017124       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:33:52.016709       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:33:52.016820       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	I0917 17:33:52.017013       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:33:52.017163       1 main.go:299] handling current node
	I0917 17:33:52.017235       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:33:52.017255       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:34:02.024399       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0917 17:34:02.024448       1 main.go:299] handling current node
	I0917 17:34:02.024463       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0917 17:34:02.024468       1 main.go:322] Node ha-181247-m02 has CIDR [10.244.1.0/24] 
	I0917 17:34:02.024615       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0917 17:34:02.024620       1 main.go:322] Node ha-181247-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4062b287e38135972ca2a83f7e5e2c394cf5e89e0b663a3717db8e56f051a633] <==
	I0917 17:28:51.539527       1 options.go:228] external host was not specified, using 192.168.39.195
	I0917 17:28:51.541801       1 server.go:142] Version: v1.31.1
	I0917 17:28:51.541887       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:52.443002       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0917 17:28:52.485235       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:28:52.488451       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 17:28:52.488475       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 17:28:52.488743       1 instance.go:232] Using reconciler: lease
	W0917 17:29:12.443614       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0917 17:29:12.443779       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0917 17:29:12.492126       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
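	
	This apiserver instance gave up after its etcd client could not finish the TLS handshake to 127.0.0.1:2379 before the deadline, consistent with the etcd shutdown logged earlier. Once a replacement apiserver is serving, its aggregate health can be checked through the API itself, for example:
	
	  kubectl --context ha-181247 get --raw='/readyz?verbose'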
	
	
	==> kube-apiserver [7d3de5c9eb5bc6e0826bc6cee807a8461e9459d4567b45925c4142709163169c] <==
	I0917 17:29:26.571976       1 controller.go:119] Starting legacy_token_tracking_controller
	I0917 17:29:26.587199       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0917 17:29:26.689656       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:29:26.689894       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:29:26.689942       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:29:26.689984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:29:26.731668       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:29:26.738343       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:29:26.738382       1 policy_source.go:224] refreshing policies
	I0917 17:29:26.772287       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:29:26.772365       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:29:26.773015       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:29:26.773164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:29:26.773575       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:29:26.775102       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:29:26.779806       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 17:29:26.786833       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 17:29:26.787445       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:29:26.790895       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:29:26.813025       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:29:27.576898       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 17:29:28.003287       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.195]
	I0917 17:29:28.004708       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:29:28.018401       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0917 17:31:38.021493       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.11 192.168.39.195]
	
	
	==> kube-controller-manager [37b475bb0d2d976d199c926f1d876639ddc076b8ccbd47e18c436cd8348a0b12] <==
	I0917 17:28:51.927032       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:28:52.554740       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0917 17:28:52.554834       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:28:52.556580       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0917 17:28:52.557363       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:28:52.557512       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 17:28:52.557596       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0917 17:29:13.500097       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.195:8443/healthz\": dial tcp 192.168.39.195:8443: connect: connection refused"
	
	
	==> kube-controller-manager [902398e71d1c4ad20c5b15fa15132b981b81450766429108d5bfb5908d26e43e] <==
	I0917 17:32:16.232397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:32:16.256527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:32:16.320751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.780049ms"
	I0917 17:32:16.320873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.269µs"
	I0917 17:32:20.552842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	I0917 17:32:21.412988       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-181247-m04"
	E0917 17:32:31.232750       1 gc_controller.go:151] "Failed to get node" err="node \"ha-181247-m03\" not found" logger="pod-garbage-collector-controller" node="ha-181247-m03"
	E0917 17:32:31.232856       1 gc_controller.go:151] "Failed to get node" err="node \"ha-181247-m03\" not found" logger="pod-garbage-collector-controller" node="ha-181247-m03"
	E0917 17:32:31.232887       1 gc_controller.go:151] "Failed to get node" err="node \"ha-181247-m03\" not found" logger="pod-garbage-collector-controller" node="ha-181247-m03"
	E0917 17:32:31.232911       1 gc_controller.go:151] "Failed to get node" err="node \"ha-181247-m03\" not found" logger="pod-garbage-collector-controller" node="ha-181247-m03"
	E0917 17:32:31.232936       1 gc_controller.go:151] "Failed to get node" err="node \"ha-181247-m03\" not found" logger="pod-garbage-collector-controller" node="ha-181247-m03"
	I0917 17:32:31.245437       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-42gpk"
	I0917 17:32:31.274029       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-42gpk"
	I0917 17:32:31.274179       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-181247-m03"
	I0917 17:32:31.320097       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-181247-m03"
	I0917 17:32:31.320137       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-181247-m03"
	I0917 17:32:31.352447       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-181247-m03"
	I0917 17:32:31.352492       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-181247-m03"
	I0917 17:32:31.384363       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-181247-m03"
	I0917 17:32:31.384829       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tkbmg"
	I0917 17:32:31.421242       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tkbmg"
	I0917 17:32:31.421292       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-181247-m03"
	I0917 17:32:31.452960       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-181247-m03"
	I0917 17:32:31.453205       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-181247-m03"
	I0917 17:32:31.506545       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-181247-m03"
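	
	The pod garbage collector here is force-deleting pods that were still bound to ha-181247-m03 after that node was removed from the cluster. Whether any such orphaned pods remain can be checked with a field selector, for example:
	
	  kubectl --context ha-181247 get pods -A --field-selector spec.nodeName=ha-181247-m03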
	
	
	==> kube-proxy [aa3e79172e8670acd510e33bf9c5da16c5d3534ce4ffe595a8e325c11657359d] <==
	W0917 17:25:53.014102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:25:53.014181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0917 17:25:53.013797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:00.373530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0917 17:26:00.373596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:00.375120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0917 17:26:00.375327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:00.375413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:00.375469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:09.591378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:09.592301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:12.664182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:12.664298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:12.664518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:12.664904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:28.021738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:28.022535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:31.098459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:31.098513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1830\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:34.165943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:34.166134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:26:55.669916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:26:55.670287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0917 17:27:07.967298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0917 17:27:07.967374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-181247&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
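	
	All of these kube-proxy list/watch failures point at control-plane.minikube.internal (the load-balanced VIP 192.168.39.254) being unreachable, rather than at any single apiserver. One way to confirm how that name resolves and whether the VIP answers from inside the node is something along these lines, assuming the ha-181247 profile:
	
	  minikube -p ha-181247 ssh "grep control-plane.minikube.internal /etc/hosts && curl -sk https://192.168.39.254:8443/version"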
	
	
	==> kube-proxy [e006a9b2aae6748542deecb0f28974d6d9aa2a82006af2640940b3f9b61863e3] <==
	E0917 17:28:55.477625       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:28:58.549784       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:01.622492       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:07.766562       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0917 17:29:16.982830       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-181247\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0917 17:29:35.156805       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0917 17:29:35.158962       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:29:35.234478       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:29:35.234552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:29:35.234592       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:29:35.237288       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:29:35.237942       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:29:35.237987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:29:35.239804       1 config.go:199] "Starting service config controller"
	I0917 17:29:35.239870       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:29:35.239907       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:29:35.239929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:29:35.240619       1 config.go:328] "Starting node config controller"
	I0917 17:29:35.240648       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:29:35.340512       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:29:35.340508       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:29:35.340688       1 shared_informer.go:320] Caches are synced for node config
	W0917 17:32:23.038246       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0917 17:32:23.038210       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0917 17:32:23.038212       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [2b77bc3ea3167ff26e92f87afb0c5a5816aa025b692fb78393cc147efa615fb4] <==
	E0917 17:21:01.481288       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntzg5\": pod kindnet-ntzg5 is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-ntzg5"
	I0917 17:21:01.481324       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntzg5" node="ha-181247-m04"
	E0917 17:21:01.481718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.481771       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod be89da91-3d03-49d5-9c40-8f0a10a29dc4(kube-system/kube-proxy-wxx9b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wxx9b"
	E0917 17:21:01.481794       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wxx9b\": pod kube-proxy-wxx9b is already assigned to node \"ha-181247-m04\"" pod="kube-system/kube-proxy-wxx9b"
	I0917 17:21:01.481828       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wxx9b" node="ha-181247-m04"
	E0917 17:21:01.598636       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:21:01.598783       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df1f81cf-787e-4442-b864-71023978df35(kube-system/kindnet-rjzts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rjzts"
	E0917 17:21:01.598965       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rjzts\": pod kindnet-rjzts is already assigned to node \"ha-181247-m04\"" pod="kube-system/kindnet-rjzts"
	I0917 17:21:01.599124       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rjzts" node="ha-181247-m04"
	E0917 17:26:56.980855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0917 17:26:57.326322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0917 17:26:57.933895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0917 17:26:58.198268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0917 17:26:58.776492       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0917 17:27:00.039461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0917 17:27:00.162384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0917 17:27:01.885926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0917 17:27:02.154029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:02.342778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0917 17:27:04.058585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:04.656014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:04.878521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0917 17:27:06.198799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0917 17:27:07.930610       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ea3f2b560fe5b38c04e3d8a4e416a5590512d910d1d5b1142735703ab8cb704e] <==
	W0917 17:29:21.216901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.216997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.666156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.666270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.195:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.928199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.928338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.195:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:21.957700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:21.957792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.214553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.214653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.195:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.274806       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.195:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.274971       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.195:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.606810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.606856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.195:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.616650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.616733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.195:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:22.985937       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:22.986024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.195:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:23.468606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.195:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.195:8443: connect: connection refused
	E0917 17:29:23.468648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.195:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.195:8443: connect: connection refused" logger="UnhandledError"
	W0917 17:29:26.602581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:29:26.602654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:29:26.603137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:29:26.603376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 17:29:31.917994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:32:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:32:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:32:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:32:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:32:52 ha-181247 kubelet[1302]: E0917 17:32:52.679016    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594372678355777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:32:52 ha-181247 kubelet[1302]: E0917 17:32:52.679095    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594372678355777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:02 ha-181247 kubelet[1302]: E0917 17:33:02.681412    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594382680867713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:02 ha-181247 kubelet[1302]: E0917 17:33:02.681983    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594382680867713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:12 ha-181247 kubelet[1302]: E0917 17:33:12.684489    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594392683742174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:12 ha-181247 kubelet[1302]: E0917 17:33:12.684530    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594392683742174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:22 ha-181247 kubelet[1302]: E0917 17:33:22.686901    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594402686392065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:22 ha-181247 kubelet[1302]: E0917 17:33:22.687386    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594402686392065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:32 ha-181247 kubelet[1302]: E0917 17:33:32.689780    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594412689412354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:32 ha-181247 kubelet[1302]: E0917 17:33:32.689826    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594412689412354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:42 ha-181247 kubelet[1302]: E0917 17:33:42.691591    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594422691302789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:42 ha-181247 kubelet[1302]: E0917 17:33:42.691633    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594422691302789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:52 ha-181247 kubelet[1302]: E0917 17:33:52.459913    1302 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:33:52 ha-181247 kubelet[1302]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:33:52 ha-181247 kubelet[1302]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:33:52 ha-181247 kubelet[1302]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:33:52 ha-181247 kubelet[1302]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:33:52 ha-181247 kubelet[1302]: E0917 17:33:52.693897    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594432693637667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:33:52 ha-181247 kubelet[1302]: E0917 17:33:52.693942    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594432693637667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:34:02 ha-181247 kubelet[1302]: E0917 17:34:02.695936    1302 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594442695440697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:34:02 ha-181247 kubelet[1302]: E0917 17:34:02.695975    1302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726594442695440697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:34:01.671636   38791 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-181247 -n ha-181247
helpers_test.go:261: (dbg) Run:  kubectl --context ha-181247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.93s)
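Note on the stderr block above: "bufio.Scanner: token too long" is the stock failure of Go's bufio.Scanner when a single line of the scanned file exceeds the default 64 KiB token limit, which is why reading lastStart.txt aborts during log collection. Below is a minimal sketch of scanning such a file with an enlarged buffer; the file path and the 1 MiB cap are illustrative assumptions, not minikube's actual logs code:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path is only an example; any file containing a very long line triggers the same error.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// A Scanner rejects tokens larger than bufio.MaxScanTokenSize (64 KiB)
	// with "token too long"; Buffer raises that cap for this scanner.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow lines up to 1 MiB
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}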

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (331.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178778
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-178778
E0917 17:48:50.535190   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-178778: exit status 82 (2m1.905980836s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-178778-m03"  ...
	* Stopping node "multinode-178778-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-178778" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178778 --wait=true -v=8 --alsologtostderr
E0917 17:51:24.984796   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178778 --wait=true -v=8 --alsologtostderr: (3m27.646335655s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178778
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-178778 -n multinode-178778
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 logs -n 25
E0917 17:53:50.532451   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-178778 logs -n 25: (1.570286279s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778:/home/docker/cp-test_multinode-178778-m02_multinode-178778.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778 sudo cat                                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m02_multinode-178778.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03:/home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778-m03 sudo cat                                   | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp testdata/cp-test.txt                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778:/home/docker/cp-test_multinode-178778-m03_multinode-178778.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778 sudo cat                                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m03_multinode-178778.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02:/home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778-m02 sudo cat                                   | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-178778 node stop m03                                                          | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	| node    | multinode-178778 node start                                                             | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-178778                                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:48 UTC |                     |
	| stop    | -p multinode-178778                                                                     | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:48 UTC |                     |
	| start   | -p multinode-178778                                                                     | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:50 UTC | 17 Sep 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-178778                                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:50:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:50:22.407405   47940 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:50:22.407818   47940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:50:22.407828   47940 out.go:358] Setting ErrFile to fd 2...
	I0917 17:50:22.407835   47940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:50:22.408119   47940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:50:22.408728   47940 out.go:352] Setting JSON to false
	I0917 17:50:22.409750   47940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5537,"bootTime":1726589885,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:50:22.409851   47940 start.go:139] virtualization: kvm guest
	I0917 17:50:22.412162   47940 out.go:177] * [multinode-178778] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:50:22.413560   47940 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:50:22.413561   47940 notify.go:220] Checking for updates...
	I0917 17:50:22.414900   47940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:50:22.416235   47940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:50:22.417819   47940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:50:22.419325   47940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:50:22.420562   47940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:50:22.422347   47940 config.go:182] Loaded profile config "multinode-178778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:50:22.422477   47940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:50:22.423141   47940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:50:22.423201   47940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:50:22.440370   47940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0917 17:50:22.440937   47940 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:50:22.441574   47940 main.go:141] libmachine: Using API Version  1
	I0917 17:50:22.441596   47940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:50:22.441962   47940 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:50:22.442181   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.479596   47940 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 17:50:22.480923   47940 start.go:297] selected driver: kvm2
	I0917 17:50:22.480947   47940 start.go:901] validating driver "kvm2" against &{Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:50:22.481107   47940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:50:22.481483   47940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:50:22.481577   47940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:50:22.497866   47940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:50:22.498603   47940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:50:22.498652   47940 cni.go:84] Creating CNI manager for ""
	I0917 17:50:22.498706   47940 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 17:50:22.498775   47940 start.go:340] cluster config:
	{Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:50:22.498963   47940 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:50:22.500937   47940 out.go:177] * Starting "multinode-178778" primary control-plane node in "multinode-178778" cluster
	I0917 17:50:22.502213   47940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:50:22.502256   47940 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:50:22.502263   47940 cache.go:56] Caching tarball of preloaded images
	I0917 17:50:22.502380   47940 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:50:22.502395   47940 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:50:22.502518   47940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/config.json ...
	I0917 17:50:22.502707   47940 start.go:360] acquireMachinesLock for multinode-178778: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:50:22.502753   47940 start.go:364] duration metric: took 25.965µs to acquireMachinesLock for "multinode-178778"
	I0917 17:50:22.502772   47940 start.go:96] Skipping create...Using existing machine configuration
	I0917 17:50:22.502780   47940 fix.go:54] fixHost starting: 
	I0917 17:50:22.503030   47940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:50:22.503065   47940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:50:22.517839   47940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I0917 17:50:22.518243   47940 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:50:22.518737   47940 main.go:141] libmachine: Using API Version  1
	I0917 17:50:22.518759   47940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:50:22.519056   47940 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:50:22.519217   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.519397   47940 main.go:141] libmachine: (multinode-178778) Calling .GetState
	I0917 17:50:22.520935   47940 fix.go:112] recreateIfNeeded on multinode-178778: state=Running err=<nil>
	W0917 17:50:22.520956   47940 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 17:50:22.523004   47940 out.go:177] * Updating the running kvm2 "multinode-178778" VM ...
	I0917 17:50:22.524521   47940 machine.go:93] provisionDockerMachine start ...
	I0917 17:50:22.524565   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.524803   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.527466   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.527945   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.527965   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.528148   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.528321   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.528475   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.528598   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.528766   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.528961   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.528976   47940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:50:22.646608   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-178778
	
	I0917 17:50:22.646633   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.646910   47940 buildroot.go:166] provisioning hostname "multinode-178778"
	I0917 17:50:22.646935   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.647114   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.649595   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.650098   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.650124   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.650313   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.650490   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.650654   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.650788   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.650920   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.651122   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.651139   47940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-178778 && echo "multinode-178778" | sudo tee /etc/hostname
	I0917 17:50:22.785094   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-178778
	
	I0917 17:50:22.785129   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.787581   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.787915   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.787948   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.788099   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.788284   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.788480   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.788609   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.788759   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.788971   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.788993   47940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178778/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:50:22.906436   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:50:22.906465   47940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:50:22.906507   47940 buildroot.go:174] setting up certificates
	I0917 17:50:22.906517   47940 provision.go:84] configureAuth start
	I0917 17:50:22.906533   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.906811   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:50:22.909670   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.909998   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.910037   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.910194   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.912444   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.912736   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.912763   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.912874   47940 provision.go:143] copyHostCerts
	I0917 17:50:22.912915   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:50:22.912972   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:50:22.912986   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:50:22.913075   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:50:22.913189   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:50:22.913216   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:50:22.913224   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:50:22.913282   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:50:22.913353   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:50:22.913376   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:50:22.913383   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:50:22.913421   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:50:22.913494   47940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.multinode-178778 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-178778]
	I0917 17:50:23.514652   47940 provision.go:177] copyRemoteCerts
	I0917 17:50:23.514714   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:50:23.514755   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:23.517680   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.518020   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:23.518051   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.518301   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:23.518512   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.518718   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:23.518819   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:50:23.604458   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:50:23.604569   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:50:23.632425   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:50:23.632490   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0917 17:50:23.661014   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:50:23.661089   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:50:23.690763   47940 provision.go:87] duration metric: took 784.229418ms to configureAuth
	I0917 17:50:23.690792   47940 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:50:23.691065   47940 config.go:182] Loaded profile config "multinode-178778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:50:23.691164   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:23.693876   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.694243   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:23.694267   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.694418   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:23.694599   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.694718   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.694835   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:23.694977   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:23.695169   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:23.695187   47940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:51:54.523792   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:51:54.523829   47940 machine.go:96] duration metric: took 1m31.999273185s to provisionDockerMachine
	I0917 17:51:54.523844   47940 start.go:293] postStartSetup for "multinode-178778" (driver="kvm2")
	I0917 17:51:54.523857   47940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:51:54.523883   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.524194   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:51:54.524240   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.527742   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.528229   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.528264   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.528453   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.528636   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.528793   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.528908   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.617719   47940 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:51:54.622164   47940 command_runner.go:130] > NAME=Buildroot
	I0917 17:51:54.622181   47940 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 17:51:54.622185   47940 command_runner.go:130] > ID=buildroot
	I0917 17:51:54.622190   47940 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 17:51:54.622195   47940 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 17:51:54.622240   47940 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:51:54.622258   47940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:51:54.622323   47940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:51:54.622424   47940 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:51:54.622437   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:51:54.622549   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:51:54.632861   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:51:54.659506   47940 start.go:296] duration metric: took 135.647767ms for postStartSetup
	I0917 17:51:54.659547   47940 fix.go:56] duration metric: took 1m32.1567664s for fixHost
	I0917 17:51:54.659569   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.662213   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.662664   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.662694   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.662876   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.663056   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.663202   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.663346   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.663510   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:51:54.663686   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:51:54.663699   47940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:51:54.778470   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726595514.742790022
	
	I0917 17:51:54.778492   47940 fix.go:216] guest clock: 1726595514.742790022
	I0917 17:51:54.778499   47940 fix.go:229] Guest: 2024-09-17 17:51:54.742790022 +0000 UTC Remote: 2024-09-17 17:51:54.659551225 +0000 UTC m=+92.289658942 (delta=83.238797ms)
	I0917 17:51:54.778517   47940 fix.go:200] guest clock delta is within tolerance: 83.238797ms
	I0917 17:51:54.778522   47940 start.go:83] releasing machines lock for "multinode-178778", held for 1m32.275759155s
	I0917 17:51:54.778543   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.778808   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:51:54.781489   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.781845   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.781867   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.782014   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782514   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782706   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782792   47940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:51:54.782847   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.782984   47940 ssh_runner.go:195] Run: cat /version.json
	I0917 17:51:54.783009   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.785493   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.785758   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.785869   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.785908   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.786056   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.786157   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.786184   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.786225   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.786310   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.786400   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.786460   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.786520   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.786568   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.786690   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.867371   47940 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0917 17:51:54.867619   47940 ssh_runner.go:195] Run: systemctl --version
	I0917 17:51:54.891627   47940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 17:51:54.891687   47940 command_runner.go:130] > systemd 252 (252)
	I0917 17:51:54.891717   47940 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0917 17:51:54.891789   47940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:51:55.051696   47940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 17:51:55.061152   47940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 17:51:55.061605   47940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:51:55.061693   47940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:51:55.071924   47940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 17:51:55.071951   47940 start.go:495] detecting cgroup driver to use...
	I0917 17:51:55.072037   47940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:51:55.090114   47940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:51:55.105787   47940 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:51:55.105871   47940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:51:55.122024   47940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:51:55.160309   47940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:51:55.312660   47940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:51:55.456580   47940 docker.go:233] disabling docker service ...
	I0917 17:51:55.456676   47940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:51:55.475785   47940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:51:55.490838   47940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:51:55.646741   47940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:51:55.799053   47940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:51:55.813687   47940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:51:55.834677   47940 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0917 17:51:55.835163   47940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:51:55.835228   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.847702   47940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:51:55.847785   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.858755   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.870055   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.881763   47940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:51:55.893710   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.905239   47940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.918113   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.929558   47940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:51:55.940665   47940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 17:51:55.940764   47940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:51:55.951426   47940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:51:56.096913   47940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:52:04.220347   47940 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.123392368s)
	I0917 17:52:04.220384   47940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:52:04.220449   47940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:52:04.226170   47940 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0917 17:52:04.226191   47940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 17:52:04.226209   47940 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0917 17:52:04.226218   47940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:52:04.226225   47940 command_runner.go:130] > Access: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226235   47940 command_runner.go:130] > Modify: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226246   47940 command_runner.go:130] > Change: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226254   47940 command_runner.go:130] >  Birth: -
	I0917 17:52:04.226484   47940 start.go:563] Will wait 60s for crictl version
	I0917 17:52:04.226535   47940 ssh_runner.go:195] Run: which crictl
	I0917 17:52:04.230478   47940 command_runner.go:130] > /usr/bin/crictl
	I0917 17:52:04.230623   47940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:52:04.276134   47940 command_runner.go:130] > Version:  0.1.0
	I0917 17:52:04.276156   47940 command_runner.go:130] > RuntimeName:  cri-o
	I0917 17:52:04.276162   47940 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0917 17:52:04.276167   47940 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 17:52:04.276188   47940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:52:04.276246   47940 ssh_runner.go:195] Run: crio --version
	I0917 17:52:04.309612   47940 command_runner.go:130] > crio version 1.29.1
	I0917 17:52:04.309637   47940 command_runner.go:130] > Version:        1.29.1
	I0917 17:52:04.309649   47940 command_runner.go:130] > GitCommit:      unknown
	I0917 17:52:04.309657   47940 command_runner.go:130] > GitCommitDate:  unknown
	I0917 17:52:04.309664   47940 command_runner.go:130] > GitTreeState:   clean
	I0917 17:52:04.309679   47940 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0917 17:52:04.309686   47940 command_runner.go:130] > GoVersion:      go1.21.6
	I0917 17:52:04.309691   47940 command_runner.go:130] > Compiler:       gc
	I0917 17:52:04.309697   47940 command_runner.go:130] > Platform:       linux/amd64
	I0917 17:52:04.309702   47940 command_runner.go:130] > Linkmode:       dynamic
	I0917 17:52:04.309707   47940 command_runner.go:130] > BuildTags:      
	I0917 17:52:04.309712   47940 command_runner.go:130] >   containers_image_ostree_stub
	I0917 17:52:04.309716   47940 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0917 17:52:04.309721   47940 command_runner.go:130] >   btrfs_noversion
	I0917 17:52:04.309729   47940 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0917 17:52:04.309738   47940 command_runner.go:130] >   libdm_no_deferred_remove
	I0917 17:52:04.309746   47940 command_runner.go:130] >   seccomp
	I0917 17:52:04.309756   47940 command_runner.go:130] > LDFlags:          unknown
	I0917 17:52:04.309762   47940 command_runner.go:130] > SeccompEnabled:   true
	I0917 17:52:04.309770   47940 command_runner.go:130] > AppArmorEnabled:  false
	I0917 17:52:04.310961   47940 ssh_runner.go:195] Run: crio --version
	I0917 17:52:04.343998   47940 command_runner.go:130] > crio version 1.29.1
	I0917 17:52:04.344024   47940 command_runner.go:130] > Version:        1.29.1
	I0917 17:52:04.344031   47940 command_runner.go:130] > GitCommit:      unknown
	I0917 17:52:04.344036   47940 command_runner.go:130] > GitCommitDate:  unknown
	I0917 17:52:04.344040   47940 command_runner.go:130] > GitTreeState:   clean
	I0917 17:52:04.344055   47940 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0917 17:52:04.344062   47940 command_runner.go:130] > GoVersion:      go1.21.6
	I0917 17:52:04.344067   47940 command_runner.go:130] > Compiler:       gc
	I0917 17:52:04.344078   47940 command_runner.go:130] > Platform:       linux/amd64
	I0917 17:52:04.344088   47940 command_runner.go:130] > Linkmode:       dynamic
	I0917 17:52:04.344092   47940 command_runner.go:130] > BuildTags:      
	I0917 17:52:04.344098   47940 command_runner.go:130] >   containers_image_ostree_stub
	I0917 17:52:04.344105   47940 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0917 17:52:04.344111   47940 command_runner.go:130] >   btrfs_noversion
	I0917 17:52:04.344116   47940 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0917 17:52:04.344121   47940 command_runner.go:130] >   libdm_no_deferred_remove
	I0917 17:52:04.344125   47940 command_runner.go:130] >   seccomp
	I0917 17:52:04.344130   47940 command_runner.go:130] > LDFlags:          unknown
	I0917 17:52:04.344133   47940 command_runner.go:130] > SeccompEnabled:   true
	I0917 17:52:04.344138   47940 command_runner.go:130] > AppArmorEnabled:  false
	I0917 17:52:04.347937   47940 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:52:04.349686   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:52:04.352706   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:52:04.353146   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:52:04.353173   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:52:04.353413   47940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:52:04.358007   47940 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0917 17:52:04.358134   47940 kubeadm.go:883] updating cluster {Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:52:04.358315   47940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:52:04.358360   47940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:52:04.410095   47940 command_runner.go:130] > {
	I0917 17:52:04.410120   47940 command_runner.go:130] >   "images": [
	I0917 17:52:04.410125   47940 command_runner.go:130] >     {
	I0917 17:52:04.410136   47940 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0917 17:52:04.410142   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410151   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0917 17:52:04.410156   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410161   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410172   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0917 17:52:04.410181   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0917 17:52:04.410186   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410193   47940 command_runner.go:130] >       "size": "87190579",
	I0917 17:52:04.410199   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410206   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410221   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410230   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410236   47940 command_runner.go:130] >     },
	I0917 17:52:04.410241   47940 command_runner.go:130] >     {
	I0917 17:52:04.410251   47940 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0917 17:52:04.410260   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410268   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0917 17:52:04.410275   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410283   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410296   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0917 17:52:04.410310   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0917 17:52:04.410316   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410323   47940 command_runner.go:130] >       "size": "1363676",
	I0917 17:52:04.410330   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410345   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410354   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410361   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410367   47940 command_runner.go:130] >     },
	I0917 17:52:04.410373   47940 command_runner.go:130] >     {
	I0917 17:52:04.410384   47940 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0917 17:52:04.410394   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410404   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0917 17:52:04.410424   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410431   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410448   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0917 17:52:04.410466   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0917 17:52:04.410473   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410485   47940 command_runner.go:130] >       "size": "31470524",
	I0917 17:52:04.410495   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410502   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410512   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410521   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410530   47940 command_runner.go:130] >     },
	I0917 17:52:04.410538   47940 command_runner.go:130] >     {
	I0917 17:52:04.410550   47940 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0917 17:52:04.410558   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410570   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0917 17:52:04.410579   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410586   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410601   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0917 17:52:04.410622   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0917 17:52:04.410629   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410639   47940 command_runner.go:130] >       "size": "63273227",
	I0917 17:52:04.410646   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410656   47940 command_runner.go:130] >       "username": "nonroot",
	I0917 17:52:04.410663   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410674   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410682   47940 command_runner.go:130] >     },
	I0917 17:52:04.410688   47940 command_runner.go:130] >     {
	I0917 17:52:04.410699   47940 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0917 17:52:04.410707   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410717   47940 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0917 17:52:04.410726   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410735   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410748   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0917 17:52:04.410763   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0917 17:52:04.410772   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410779   47940 command_runner.go:130] >       "size": "149009664",
	I0917 17:52:04.410787   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.410795   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.410803   47940 command_runner.go:130] >       },
	I0917 17:52:04.410810   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410819   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410828   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410834   47940 command_runner.go:130] >     },
	I0917 17:52:04.410839   47940 command_runner.go:130] >     {
	I0917 17:52:04.410856   47940 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0917 17:52:04.410865   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410874   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0917 17:52:04.410883   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410890   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410905   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0917 17:52:04.410921   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0917 17:52:04.410930   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410942   47940 command_runner.go:130] >       "size": "95237600",
	I0917 17:52:04.410950   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.410958   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.410965   47940 command_runner.go:130] >       },
	I0917 17:52:04.410975   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410982   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410991   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410997   47940 command_runner.go:130] >     },
	I0917 17:52:04.411004   47940 command_runner.go:130] >     {
	I0917 17:52:04.411016   47940 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0917 17:52:04.411024   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411034   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0917 17:52:04.411043   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411049   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411065   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0917 17:52:04.411080   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0917 17:52:04.411089   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411096   47940 command_runner.go:130] >       "size": "89437508",
	I0917 17:52:04.411105   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411112   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.411121   47940 command_runner.go:130] >       },
	I0917 17:52:04.411128   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411137   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411144   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411152   47940 command_runner.go:130] >     },
	I0917 17:52:04.411166   47940 command_runner.go:130] >     {
	I0917 17:52:04.411179   47940 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0917 17:52:04.411188   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411196   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0917 17:52:04.411205   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411213   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411238   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0917 17:52:04.411254   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0917 17:52:04.411263   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411271   47940 command_runner.go:130] >       "size": "92733849",
	I0917 17:52:04.411280   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.411286   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411291   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411297   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411303   47940 command_runner.go:130] >     },
	I0917 17:52:04.411311   47940 command_runner.go:130] >     {
	I0917 17:52:04.411322   47940 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0917 17:52:04.411329   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411338   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0917 17:52:04.411346   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411354   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411369   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0917 17:52:04.411412   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0917 17:52:04.411425   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411432   47940 command_runner.go:130] >       "size": "68420934",
	I0917 17:52:04.411439   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411450   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.411457   47940 command_runner.go:130] >       },
	I0917 17:52:04.411466   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411474   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411483   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411490   47940 command_runner.go:130] >     },
	I0917 17:52:04.411498   47940 command_runner.go:130] >     {
	I0917 17:52:04.411511   47940 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0917 17:52:04.411521   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411530   47940 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0917 17:52:04.411538   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411546   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411560   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0917 17:52:04.411575   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0917 17:52:04.411584   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411592   47940 command_runner.go:130] >       "size": "742080",
	I0917 17:52:04.411601   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411608   47940 command_runner.go:130] >         "value": "65535"
	I0917 17:52:04.411616   47940 command_runner.go:130] >       },
	I0917 17:52:04.411623   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411633   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411642   47940 command_runner.go:130] >       "pinned": true
	I0917 17:52:04.411649   47940 command_runner.go:130] >     }
	I0917 17:52:04.411659   47940 command_runner.go:130] >   ]
	I0917 17:52:04.411664   47940 command_runner.go:130] > }
	I0917 17:52:04.411846   47940 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:52:04.411859   47940 crio.go:433] Images already preloaded, skipping extraction
	I0917 17:52:04.411924   47940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:52:04.448053   47940 command_runner.go:130] > {
	I0917 17:52:04.448080   47940 command_runner.go:130] >   "images": [
	I0917 17:52:04.448086   47940 command_runner.go:130] >     {
	I0917 17:52:04.448097   47940 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0917 17:52:04.448104   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448113   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0917 17:52:04.448119   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448125   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448136   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0917 17:52:04.448146   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0917 17:52:04.448151   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448159   47940 command_runner.go:130] >       "size": "87190579",
	I0917 17:52:04.448166   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448179   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448188   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448198   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448213   47940 command_runner.go:130] >     },
	I0917 17:52:04.448222   47940 command_runner.go:130] >     {
	I0917 17:52:04.448232   47940 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0917 17:52:04.448238   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448247   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0917 17:52:04.448256   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448263   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448276   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0917 17:52:04.448291   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0917 17:52:04.448300   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448311   47940 command_runner.go:130] >       "size": "1363676",
	I0917 17:52:04.448321   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448349   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448358   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448364   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448370   47940 command_runner.go:130] >     },
	I0917 17:52:04.448376   47940 command_runner.go:130] >     {
	I0917 17:52:04.448387   47940 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0917 17:52:04.448401   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448412   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0917 17:52:04.448422   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448430   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448446   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0917 17:52:04.448461   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0917 17:52:04.448471   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448478   47940 command_runner.go:130] >       "size": "31470524",
	I0917 17:52:04.448487   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448493   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448502   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448510   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448518   47940 command_runner.go:130] >     },
	I0917 17:52:04.448525   47940 command_runner.go:130] >     {
	I0917 17:52:04.448539   47940 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0917 17:52:04.448556   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448568   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0917 17:52:04.448576   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448583   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448596   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0917 17:52:04.448620   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0917 17:52:04.448629   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448637   47940 command_runner.go:130] >       "size": "63273227",
	I0917 17:52:04.448647   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448655   47940 command_runner.go:130] >       "username": "nonroot",
	I0917 17:52:04.448663   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448671   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448679   47940 command_runner.go:130] >     },
	I0917 17:52:04.448686   47940 command_runner.go:130] >     {
	I0917 17:52:04.448699   47940 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0917 17:52:04.448709   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448718   47940 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0917 17:52:04.448725   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448733   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448748   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0917 17:52:04.448762   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0917 17:52:04.448771   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448778   47940 command_runner.go:130] >       "size": "149009664",
	I0917 17:52:04.448786   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.448794   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.448802   47940 command_runner.go:130] >       },
	I0917 17:52:04.448810   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448819   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448830   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448838   47940 command_runner.go:130] >     },
	I0917 17:52:04.448845   47940 command_runner.go:130] >     {
	I0917 17:52:04.448858   47940 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0917 17:52:04.448868   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448885   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0917 17:52:04.448894   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448902   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448916   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0917 17:52:04.448932   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0917 17:52:04.448940   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448947   47940 command_runner.go:130] >       "size": "95237600",
	I0917 17:52:04.448955   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.448962   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.448969   47940 command_runner.go:130] >       },
	I0917 17:52:04.448976   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448986   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448995   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449001   47940 command_runner.go:130] >     },
	I0917 17:52:04.449009   47940 command_runner.go:130] >     {
	I0917 17:52:04.449022   47940 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0917 17:52:04.449032   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449041   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0917 17:52:04.449050   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449058   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449077   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0917 17:52:04.449093   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0917 17:52:04.449102   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449110   47940 command_runner.go:130] >       "size": "89437508",
	I0917 17:52:04.449118   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449125   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.449134   47940 command_runner.go:130] >       },
	I0917 17:52:04.449142   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449150   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449158   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449166   47940 command_runner.go:130] >     },
	I0917 17:52:04.449172   47940 command_runner.go:130] >     {
	I0917 17:52:04.449185   47940 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0917 17:52:04.449195   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449204   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0917 17:52:04.449212   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449220   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449255   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0917 17:52:04.449270   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0917 17:52:04.449277   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449288   47940 command_runner.go:130] >       "size": "92733849",
	I0917 17:52:04.449297   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.449305   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449314   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449321   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449340   47940 command_runner.go:130] >     },
	I0917 17:52:04.449348   47940 command_runner.go:130] >     {
	I0917 17:52:04.449359   47940 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0917 17:52:04.449368   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449377   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0917 17:52:04.449385   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449393   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449408   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0917 17:52:04.449424   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0917 17:52:04.449432   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449443   47940 command_runner.go:130] >       "size": "68420934",
	I0917 17:52:04.449450   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449459   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.449468   47940 command_runner.go:130] >       },
	I0917 17:52:04.449477   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449486   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449494   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449502   47940 command_runner.go:130] >     },
	I0917 17:52:04.449509   47940 command_runner.go:130] >     {
	I0917 17:52:04.449522   47940 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0917 17:52:04.449531   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449544   47940 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0917 17:52:04.449552   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449560   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449575   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0917 17:52:04.449590   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0917 17:52:04.449599   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449606   47940 command_runner.go:130] >       "size": "742080",
	I0917 17:52:04.449614   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449623   47940 command_runner.go:130] >         "value": "65535"
	I0917 17:52:04.449632   47940 command_runner.go:130] >       },
	I0917 17:52:04.449641   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449649   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449656   47940 command_runner.go:130] >       "pinned": true
	I0917 17:52:04.449664   47940 command_runner.go:130] >     }
	I0917 17:52:04.449670   47940 command_runner.go:130] >   ]
	I0917 17:52:04.449678   47940 command_runner.go:130] > }
	I0917 17:52:04.449801   47940 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:52:04.449814   47940 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:52:04.449824   47940 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.1 crio true true} ...
	I0917 17:52:04.449950   47940 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:52:04.450038   47940 ssh_runner.go:195] Run: crio config
	I0917 17:52:04.493383   47940 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0917 17:52:04.493415   47940 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0917 17:52:04.493425   47940 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0917 17:52:04.493429   47940 command_runner.go:130] > #
	I0917 17:52:04.493440   47940 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0917 17:52:04.493448   47940 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0917 17:52:04.493456   47940 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0917 17:52:04.493464   47940 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0917 17:52:04.493470   47940 command_runner.go:130] > # reload'.
	I0917 17:52:04.493480   47940 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0917 17:52:04.493489   47940 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0917 17:52:04.493499   47940 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0917 17:52:04.493533   47940 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0917 17:52:04.493543   47940 command_runner.go:130] > [crio]
	I0917 17:52:04.493555   47940 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0917 17:52:04.493563   47940 command_runner.go:130] > # containers images, in this directory.
	I0917 17:52:04.493574   47940 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0917 17:52:04.493593   47940 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0917 17:52:04.493604   47940 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0917 17:52:04.493617   47940 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I0917 17:52:04.493833   47940 command_runner.go:130] > # imagestore = ""
	I0917 17:52:04.493848   47940 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0917 17:52:04.493858   47940 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0917 17:52:04.494461   47940 command_runner.go:130] > storage_driver = "overlay"
	I0917 17:52:04.494484   47940 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0917 17:52:04.494495   47940 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0917 17:52:04.494502   47940 command_runner.go:130] > storage_option = [
	I0917 17:52:04.495007   47940 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0917 17:52:04.495019   47940 command_runner.go:130] > ]
	I0917 17:52:04.495030   47940 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0917 17:52:04.495039   47940 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0917 17:52:04.495047   47940 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0917 17:52:04.495058   47940 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0917 17:52:04.495074   47940 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0917 17:52:04.495083   47940 command_runner.go:130] > # always happen on a node reboot
	I0917 17:52:04.495107   47940 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0917 17:52:04.495124   47940 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0917 17:52:04.495135   47940 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0917 17:52:04.495147   47940 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0917 17:52:04.495157   47940 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0917 17:52:04.495172   47940 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0917 17:52:04.495189   47940 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0917 17:52:04.495199   47940 command_runner.go:130] > # internal_wipe = true
	I0917 17:52:04.495216   47940 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0917 17:52:04.495228   47940 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0917 17:52:04.495236   47940 command_runner.go:130] > # internal_repair = false
	I0917 17:52:04.495246   47940 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0917 17:52:04.495256   47940 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0917 17:52:04.495266   47940 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0917 17:52:04.495278   47940 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0917 17:52:04.495289   47940 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0917 17:52:04.495298   47940 command_runner.go:130] > [crio.api]
	I0917 17:52:04.495317   47940 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0917 17:52:04.495344   47940 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0917 17:52:04.495357   47940 command_runner.go:130] > # IP address on which the stream server will listen.
	I0917 17:52:04.495367   47940 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0917 17:52:04.495382   47940 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0917 17:52:04.495393   47940 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0917 17:52:04.495400   47940 command_runner.go:130] > # stream_port = "0"
	I0917 17:52:04.495412   47940 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0917 17:52:04.495423   47940 command_runner.go:130] > # stream_enable_tls = false
	I0917 17:52:04.495436   47940 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0917 17:52:04.495447   47940 command_runner.go:130] > # stream_idle_timeout = ""
	I0917 17:52:04.495460   47940 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0917 17:52:04.495474   47940 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0917 17:52:04.495483   47940 command_runner.go:130] > # minutes.
	I0917 17:52:04.495493   47940 command_runner.go:130] > # stream_tls_cert = ""
	I0917 17:52:04.495506   47940 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0917 17:52:04.495534   47940 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0917 17:52:04.495544   47940 command_runner.go:130] > # stream_tls_key = ""
	I0917 17:52:04.495556   47940 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0917 17:52:04.495570   47940 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0917 17:52:04.495598   47940 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0917 17:52:04.495607   47940 command_runner.go:130] > # stream_tls_ca = ""
	I0917 17:52:04.495620   47940 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0917 17:52:04.495630   47940 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0917 17:52:04.495645   47940 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0917 17:52:04.495657   47940 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0917 17:52:04.495669   47940 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0917 17:52:04.495681   47940 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0917 17:52:04.495690   47940 command_runner.go:130] > [crio.runtime]
	I0917 17:52:04.495700   47940 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0917 17:52:04.495712   47940 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0917 17:52:04.495721   47940 command_runner.go:130] > # "nofile=1024:2048"
	I0917 17:52:04.495731   47940 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0917 17:52:04.495740   47940 command_runner.go:130] > # default_ulimits = [
	I0917 17:52:04.495745   47940 command_runner.go:130] > # ]
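For reference, a minimal sketch of what an enabled default_ulimits entry could look like, reusing the "<ulimit name>=<soft limit>:<hard limit>" example from the comments above (the value is illustrative, not taken from this run):

    default_ulimits = [
        # example value from the comment above
        "nofile=1024:2048",
    ]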
	I0917 17:52:04.495753   47940 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0917 17:52:04.495759   47940 command_runner.go:130] > # no_pivot = false
	I0917 17:52:04.495768   47940 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0917 17:52:04.495781   47940 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0917 17:52:04.495791   47940 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0917 17:52:04.495801   47940 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0917 17:52:04.495811   47940 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0917 17:52:04.495822   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0917 17:52:04.495833   47940 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0917 17:52:04.495842   47940 command_runner.go:130] > # Cgroup setting for conmon
	I0917 17:52:04.495854   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0917 17:52:04.495864   47940 command_runner.go:130] > conmon_cgroup = "pod"
	I0917 17:52:04.495876   47940 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0917 17:52:04.495886   47940 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0917 17:52:04.495908   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0917 17:52:04.495918   47940 command_runner.go:130] > conmon_env = [
	I0917 17:52:04.495928   47940 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0917 17:52:04.495936   47940 command_runner.go:130] > ]
	I0917 17:52:04.495945   47940 command_runner.go:130] > # Additional environment variables to set for all the
	I0917 17:52:04.495956   47940 command_runner.go:130] > # containers. These are overridden if set in the
	I0917 17:52:04.495966   47940 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0917 17:52:04.495976   47940 command_runner.go:130] > # default_env = [
	I0917 17:52:04.495985   47940 command_runner.go:130] > # ]
	I0917 17:52:04.495994   47940 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0917 17:52:04.496011   47940 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0917 17:52:04.496020   47940 command_runner.go:130] > # selinux = false
	I0917 17:52:04.496032   47940 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0917 17:52:04.496044   47940 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0917 17:52:04.496057   47940 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0917 17:52:04.496067   47940 command_runner.go:130] > # seccomp_profile = ""
	I0917 17:52:04.496080   47940 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0917 17:52:04.496092   47940 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0917 17:52:04.496105   47940 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0917 17:52:04.496113   47940 command_runner.go:130] > # which might increase security.
	I0917 17:52:04.496124   47940 command_runner.go:130] > # This option is currently deprecated,
	I0917 17:52:04.496137   47940 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0917 17:52:04.496148   47940 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0917 17:52:04.496160   47940 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0917 17:52:04.496173   47940 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0917 17:52:04.496185   47940 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0917 17:52:04.496197   47940 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0917 17:52:04.496207   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.496219   47940 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0917 17:52:04.496232   47940 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0917 17:52:04.496242   47940 command_runner.go:130] > # the cgroup blockio controller.
	I0917 17:52:04.496251   47940 command_runner.go:130] > # blockio_config_file = ""
	I0917 17:52:04.496264   47940 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0917 17:52:04.496281   47940 command_runner.go:130] > # blockio parameters.
	I0917 17:52:04.496292   47940 command_runner.go:130] > # blockio_reload = false
	I0917 17:52:04.496306   47940 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0917 17:52:04.496316   47940 command_runner.go:130] > # irqbalance daemon.
	I0917 17:52:04.496325   47940 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0917 17:52:04.496343   47940 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0917 17:52:04.496358   47940 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0917 17:52:04.496377   47940 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0917 17:52:04.496390   47940 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0917 17:52:04.496403   47940 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0917 17:52:04.496413   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.496423   47940 command_runner.go:130] > # rdt_config_file = ""
	I0917 17:52:04.496433   47940 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0917 17:52:04.496443   47940 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0917 17:52:04.496473   47940 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0917 17:52:04.496483   47940 command_runner.go:130] > # separate_pull_cgroup = ""
	I0917 17:52:04.496494   47940 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0917 17:52:04.496507   47940 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0917 17:52:04.496516   47940 command_runner.go:130] > # will be added.
	I0917 17:52:04.496527   47940 command_runner.go:130] > # default_capabilities = [
	I0917 17:52:04.496534   47940 command_runner.go:130] > # 	"CHOWN",
	I0917 17:52:04.496544   47940 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0917 17:52:04.496553   47940 command_runner.go:130] > # 	"FSETID",
	I0917 17:52:04.496560   47940 command_runner.go:130] > # 	"FOWNER",
	I0917 17:52:04.496568   47940 command_runner.go:130] > # 	"SETGID",
	I0917 17:52:04.496575   47940 command_runner.go:130] > # 	"SETUID",
	I0917 17:52:04.496587   47940 command_runner.go:130] > # 	"SETPCAP",
	I0917 17:52:04.496595   47940 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0917 17:52:04.496604   47940 command_runner.go:130] > # 	"KILL",
	I0917 17:52:04.496611   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496626   47940 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0917 17:52:04.496639   47940 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0917 17:52:04.496650   47940 command_runner.go:130] > # add_inheritable_capabilities = false
	I0917 17:52:04.496668   47940 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0917 17:52:04.496680   47940 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0917 17:52:04.496687   47940 command_runner.go:130] > default_sysctls = [
	I0917 17:52:04.496696   47940 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0917 17:52:04.496704   47940 command_runner.go:130] > ]
	I0917 17:52:04.496713   47940 command_runner.go:130] > # List of devices on the host that a
	I0917 17:52:04.496726   47940 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0917 17:52:04.496735   47940 command_runner.go:130] > # allowed_devices = [
	I0917 17:52:04.496742   47940 command_runner.go:130] > # 	"/dev/fuse",
	I0917 17:52:04.496750   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496759   47940 command_runner.go:130] > # List of additional devices, specified as
	I0917 17:52:04.496773   47940 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0917 17:52:04.496783   47940 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0917 17:52:04.496796   47940 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0917 17:52:04.496806   47940 command_runner.go:130] > # additional_devices = [
	I0917 17:52:04.496814   47940 command_runner.go:130] > # ]
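A hypothetical sketch of the "<device-on-host>:<device-on-container>:<permissions>" form for additional_devices, mirroring the example device from the comment above (the device paths are illustrative only):

    additional_devices = [
        # device paths are illustrative, not present in this run
        "/dev/sdc:/dev/xvdc:rwm",
    ]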
	I0917 17:52:04.496824   47940 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0917 17:52:04.496833   47940 command_runner.go:130] > # cdi_spec_dirs = [
	I0917 17:52:04.496839   47940 command_runner.go:130] > # 	"/etc/cdi",
	I0917 17:52:04.496850   47940 command_runner.go:130] > # 	"/var/run/cdi",
	I0917 17:52:04.496858   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496869   47940 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0917 17:52:04.496883   47940 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0917 17:52:04.496892   47940 command_runner.go:130] > # Defaults to false.
	I0917 17:52:04.496903   47940 command_runner.go:130] > # device_ownership_from_security_context = false
	I0917 17:52:04.496918   47940 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0917 17:52:04.496930   47940 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0917 17:52:04.496937   47940 command_runner.go:130] > # hooks_dir = [
	I0917 17:52:04.496949   47940 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0917 17:52:04.496957   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496969   47940 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0917 17:52:04.496982   47940 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0917 17:52:04.496993   47940 command_runner.go:130] > # its default mounts from the following two files:
	I0917 17:52:04.497007   47940 command_runner.go:130] > #
	I0917 17:52:04.497021   47940 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0917 17:52:04.497035   47940 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0917 17:52:04.497047   47940 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0917 17:52:04.497054   47940 command_runner.go:130] > #
	I0917 17:52:04.497064   47940 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0917 17:52:04.497076   47940 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0917 17:52:04.497087   47940 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0917 17:52:04.497099   47940 command_runner.go:130] > #      only add mounts it finds in this file.
	I0917 17:52:04.497107   47940 command_runner.go:130] > #
	I0917 17:52:04.497115   47940 command_runner.go:130] > # default_mounts_file = ""
	I0917 17:52:04.497126   47940 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0917 17:52:04.497141   47940 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0917 17:52:04.497151   47940 command_runner.go:130] > pids_limit = 1024
	I0917 17:52:04.497162   47940 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0917 17:52:04.497176   47940 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0917 17:52:04.497189   47940 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0917 17:52:04.497203   47940 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0917 17:52:04.497212   47940 command_runner.go:130] > # log_size_max = -1
	I0917 17:52:04.497225   47940 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0917 17:52:04.497247   47940 command_runner.go:130] > # log_to_journald = false
	I0917 17:52:04.497259   47940 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0917 17:52:04.497270   47940 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0917 17:52:04.497282   47940 command_runner.go:130] > # Path to directory for container attach sockets.
	I0917 17:52:04.497294   47940 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0917 17:52:04.497306   47940 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0917 17:52:04.497316   47940 command_runner.go:130] > # bind_mount_prefix = ""
	I0917 17:52:04.497333   47940 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0917 17:52:04.497342   47940 command_runner.go:130] > # read_only = false
	I0917 17:52:04.497356   47940 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0917 17:52:04.497369   47940 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0917 17:52:04.497376   47940 command_runner.go:130] > # live configuration reload.
	I0917 17:52:04.497386   47940 command_runner.go:130] > # log_level = "info"
	I0917 17:52:04.497399   47940 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0917 17:52:04.497411   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.497420   47940 command_runner.go:130] > # log_filter = ""
	I0917 17:52:04.497431   47940 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0917 17:52:04.497444   47940 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0917 17:52:04.497454   47940 command_runner.go:130] > # separated by comma.
	I0917 17:52:04.497468   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497478   47940 command_runner.go:130] > # uid_mappings = ""
	I0917 17:52:04.497490   47940 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0917 17:52:04.497503   47940 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0917 17:52:04.497512   47940 command_runner.go:130] > # separated by comma.
	I0917 17:52:04.497525   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497534   47940 command_runner.go:130] > # gid_mappings = ""
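If these deprecated mapping options were ever enabled, the containerUID:HostUID:Size / containerGID:HostGID:Size form described above could look like the sketch below (the ranges are purely illustrative and are not part of this configuration):

    # ranges are illustrative; both options are deprecated per the comments above
    uid_mappings = "0:100000:65536"
    gid_mappings = "0:100000:65536"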
	I0917 17:52:04.497545   47940 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0917 17:52:04.497558   47940 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0917 17:52:04.497571   47940 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0917 17:52:04.497585   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497596   47940 command_runner.go:130] > # minimum_mappable_uid = -1
	I0917 17:52:04.497608   47940 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0917 17:52:04.497621   47940 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0917 17:52:04.497634   47940 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0917 17:52:04.497647   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497657   47940 command_runner.go:130] > # minimum_mappable_gid = -1
	I0917 17:52:04.497670   47940 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0917 17:52:04.497683   47940 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0917 17:52:04.497696   47940 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0917 17:52:04.497707   47940 command_runner.go:130] > # ctr_stop_timeout = 30
	I0917 17:52:04.497719   47940 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0917 17:52:04.497731   47940 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0917 17:52:04.497739   47940 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0917 17:52:04.497750   47940 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0917 17:52:04.497757   47940 command_runner.go:130] > drop_infra_ctr = false
	I0917 17:52:04.497769   47940 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0917 17:52:04.497782   47940 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0917 17:52:04.497797   47940 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0917 17:52:04.497807   47940 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0917 17:52:04.497822   47940 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0917 17:52:04.497835   47940 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0917 17:52:04.497847   47940 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0917 17:52:04.497859   47940 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0917 17:52:04.497870   47940 command_runner.go:130] > # shared_cpuset = ""
	I0917 17:52:04.497882   47940 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0917 17:52:04.497893   47940 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0917 17:52:04.497901   47940 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0917 17:52:04.497913   47940 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0917 17:52:04.497921   47940 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0917 17:52:04.497933   47940 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0917 17:52:04.497947   47940 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0917 17:52:04.497957   47940 command_runner.go:130] > # enable_criu_support = false
	I0917 17:52:04.497968   47940 command_runner.go:130] > # Enable/disable the generation of the container,
	I0917 17:52:04.497980   47940 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0917 17:52:04.497989   47940 command_runner.go:130] > # enable_pod_events = false
	I0917 17:52:04.498001   47940 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0917 17:52:04.498024   47940 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0917 17:52:04.498033   47940 command_runner.go:130] > # default_runtime = "runc"
	I0917 17:52:04.498044   47940 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0917 17:52:04.498059   47940 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0917 17:52:04.498077   47940 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0917 17:52:04.498088   47940 command_runner.go:130] > # creation as a file is not desired either.
	I0917 17:52:04.498102   47940 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0917 17:52:04.498113   47940 command_runner.go:130] > # the hostname is being managed dynamically.
	I0917 17:52:04.498122   47940 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0917 17:52:04.498130   47940 command_runner.go:130] > # ]
	I0917 17:52:04.498143   47940 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0917 17:52:04.498157   47940 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0917 17:52:04.498182   47940 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0917 17:52:04.498194   47940 command_runner.go:130] > # Each entry in the table should follow the format:
	I0917 17:52:04.498200   47940 command_runner.go:130] > #
	I0917 17:52:04.498210   47940 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0917 17:52:04.498221   47940 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0917 17:52:04.498279   47940 command_runner.go:130] > # runtime_type = "oci"
	I0917 17:52:04.498289   47940 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0917 17:52:04.498295   47940 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0917 17:52:04.498302   47940 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0917 17:52:04.498310   47940 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0917 17:52:04.498325   47940 command_runner.go:130] > # monitor_env = []
	I0917 17:52:04.498340   47940 command_runner.go:130] > # privileged_without_host_devices = false
	I0917 17:52:04.498350   47940 command_runner.go:130] > # allowed_annotations = []
	I0917 17:52:04.498363   47940 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0917 17:52:04.498372   47940 command_runner.go:130] > # Where:
	I0917 17:52:04.498382   47940 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0917 17:52:04.498396   47940 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0917 17:52:04.498409   47940 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0917 17:52:04.498423   47940 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0917 17:52:04.498432   47940 command_runner.go:130] > #   in $PATH.
	I0917 17:52:04.498443   47940 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0917 17:52:04.498453   47940 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0917 17:52:04.498467   47940 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0917 17:52:04.498476   47940 command_runner.go:130] > #   state.
	I0917 17:52:04.498488   47940 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0917 17:52:04.498501   47940 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0917 17:52:04.498514   47940 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0917 17:52:04.498526   47940 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0917 17:52:04.498540   47940 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0917 17:52:04.498553   47940 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0917 17:52:04.498564   47940 command_runner.go:130] > #   The currently recognized values are:
	I0917 17:52:04.498576   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0917 17:52:04.498590   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0917 17:52:04.498610   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0917 17:52:04.498623   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0917 17:52:04.498638   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0917 17:52:04.498651   47940 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0917 17:52:04.498665   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0917 17:52:04.498678   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0917 17:52:04.498688   47940 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0917 17:52:04.498701   47940 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0917 17:52:04.498711   47940 command_runner.go:130] > #   deprecated option "conmon".
	I0917 17:52:04.498724   47940 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0917 17:52:04.498735   47940 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0917 17:52:04.498750   47940 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0917 17:52:04.498761   47940 command_runner.go:130] > #   should be moved to the container's cgroup
	I0917 17:52:04.498774   47940 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0917 17:52:04.498785   47940 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0917 17:52:04.498800   47940 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0917 17:52:04.498811   47940 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0917 17:52:04.498819   47940 command_runner.go:130] > #
	I0917 17:52:04.498828   47940 command_runner.go:130] > # Using the seccomp notifier feature:
	I0917 17:52:04.498835   47940 command_runner.go:130] > #
	I0917 17:52:04.498845   47940 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0917 17:52:04.498859   47940 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0917 17:52:04.498867   47940 command_runner.go:130] > #
	I0917 17:52:04.498878   47940 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0917 17:52:04.498892   47940 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0917 17:52:04.498899   47940 command_runner.go:130] > #
	I0917 17:52:04.498910   47940 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0917 17:52:04.498919   47940 command_runner.go:130] > # feature.
	I0917 17:52:04.498924   47940 command_runner.go:130] > #
	I0917 17:52:04.498937   47940 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0917 17:52:04.498950   47940 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0917 17:52:04.498963   47940 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0917 17:52:04.498976   47940 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0917 17:52:04.498997   47940 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0917 17:52:04.499005   47940 command_runner.go:130] > #
	I0917 17:52:04.499016   47940 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0917 17:52:04.499031   47940 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0917 17:52:04.499039   47940 command_runner.go:130] > #
	I0917 17:52:04.499050   47940 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0917 17:52:04.499062   47940 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0917 17:52:04.499070   47940 command_runner.go:130] > #
	I0917 17:52:04.499081   47940 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0917 17:52:04.499092   47940 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0917 17:52:04.499099   47940 command_runner.go:130] > # limitation.
	I0917 17:52:04.499108   47940 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0917 17:52:04.499117   47940 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0917 17:52:04.499127   47940 command_runner.go:130] > runtime_type = "oci"
	I0917 17:52:04.499135   47940 command_runner.go:130] > runtime_root = "/run/runc"
	I0917 17:52:04.499145   47940 command_runner.go:130] > runtime_config_path = ""
	I0917 17:52:04.499155   47940 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0917 17:52:04.499165   47940 command_runner.go:130] > monitor_cgroup = "pod"
	I0917 17:52:04.499174   47940 command_runner.go:130] > monitor_exec_cgroup = ""
	I0917 17:52:04.499182   47940 command_runner.go:130] > monitor_env = [
	I0917 17:52:04.499195   47940 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0917 17:52:04.499203   47940 command_runner.go:130] > ]
	I0917 17:52:04.499212   47940 command_runner.go:130] > privileged_without_host_devices = false
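Following the [crio.runtime.runtimes.runtime-handler] format documented above, an additional handler could be declared next to runc. This is only a sketch; it assumes a crun binary at /usr/bin/crun, which is not part of this test environment:

    # hypothetical second handler; the crun binary path is an assumption
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"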
	I0917 17:52:04.499226   47940 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0917 17:52:04.499237   47940 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0917 17:52:04.499253   47940 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0917 17:52:04.499267   47940 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0917 17:52:04.499281   47940 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0917 17:52:04.499294   47940 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0917 17:52:04.499312   47940 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0917 17:52:04.499332   47940 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0917 17:52:04.499344   47940 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0917 17:52:04.499359   47940 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0917 17:52:04.499376   47940 command_runner.go:130] > # Example:
	I0917 17:52:04.499387   47940 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0917 17:52:04.499396   47940 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0917 17:52:04.499407   47940 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0917 17:52:04.499419   47940 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0917 17:52:04.499428   47940 command_runner.go:130] > # cpuset = 0
	I0917 17:52:04.499435   47940 command_runner.go:130] > # cpushares = "0-1"
	I0917 17:52:04.499445   47940 command_runner.go:130] > # Where:
	I0917 17:52:04.499453   47940 command_runner.go:130] > # The workload name is workload-type.
	I0917 17:52:04.499468   47940 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0917 17:52:04.499481   47940 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0917 17:52:04.499493   47940 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0917 17:52:04.499509   47940 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0917 17:52:04.499521   47940 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
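Gathered from the example fragments in the comments above, a workload declaration would look roughly like the sketch below; the names come from that example, the table is marked EXPERIMENTAL, and per-resource defaults would go under [crio.runtime.workloads.workload-type.resources] as shown in the commented example (see crio.conf(5) for the exact schema):

    # names taken from the example comments above; nothing here is active in this run
    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"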
	I0917 17:52:04.499531   47940 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0917 17:52:04.499545   47940 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0917 17:52:04.499556   47940 command_runner.go:130] > # Default value is set to true
	I0917 17:52:04.499566   47940 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0917 17:52:04.499577   47940 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0917 17:52:04.499588   47940 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0917 17:52:04.499596   47940 command_runner.go:130] > # Default value is set to 'false'
	I0917 17:52:04.499607   47940 command_runner.go:130] > # disable_hostport_mapping = false
	I0917 17:52:04.499619   47940 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0917 17:52:04.499626   47940 command_runner.go:130] > #
	I0917 17:52:04.499636   47940 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0917 17:52:04.499647   47940 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0917 17:52:04.499655   47940 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0917 17:52:04.499664   47940 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0917 17:52:04.499672   47940 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0917 17:52:04.499678   47940 command_runner.go:130] > [crio.image]
	I0917 17:52:04.499687   47940 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0917 17:52:04.499695   47940 command_runner.go:130] > # default_transport = "docker://"
	I0917 17:52:04.499705   47940 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0917 17:52:04.499723   47940 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0917 17:52:04.499730   47940 command_runner.go:130] > # global_auth_file = ""
	I0917 17:52:04.499738   47940 command_runner.go:130] > # The image used to instantiate infra containers.
	I0917 17:52:04.499747   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.499756   47940 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0917 17:52:04.499766   47940 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0917 17:52:04.499776   47940 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0917 17:52:04.499784   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.499791   47940 command_runner.go:130] > # pause_image_auth_file = ""
	I0917 17:52:04.499800   47940 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0917 17:52:04.499810   47940 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0917 17:52:04.499825   47940 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0917 17:52:04.499835   47940 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0917 17:52:04.499842   47940 command_runner.go:130] > # pause_command = "/pause"
	I0917 17:52:04.499851   47940 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0917 17:52:04.499860   47940 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0917 17:52:04.499870   47940 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0917 17:52:04.499885   47940 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0917 17:52:04.499898   47940 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0917 17:52:04.499911   47940 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0917 17:52:04.499921   47940 command_runner.go:130] > # pinned_images = [
	I0917 17:52:04.499929   47940 command_runner.go:130] > # ]
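A sketch of pinned_images using the exact and glob patterns described above; the first entry matches the pause_image value shown earlier in this config, while the glob entry is purely hypothetical:

    pinned_images = [
        "registry.k8s.io/pause:3.10",
        # glob entry is hypothetical
        "quay.io/crio/*",
    ]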
	I0917 17:52:04.499939   47940 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0917 17:52:04.499952   47940 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0917 17:52:04.499966   47940 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0917 17:52:04.499980   47940 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0917 17:52:04.499993   47940 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0917 17:52:04.500003   47940 command_runner.go:130] > # signature_policy = ""
	I0917 17:52:04.500014   47940 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0917 17:52:04.500025   47940 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0917 17:52:04.500039   47940 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0917 17:52:04.500052   47940 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0917 17:52:04.500065   47940 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0917 17:52:04.500083   47940 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0917 17:52:04.500096   47940 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0917 17:52:04.500109   47940 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0917 17:52:04.500119   47940 command_runner.go:130] > # changing them here.
	I0917 17:52:04.500128   47940 command_runner.go:130] > # insecure_registries = [
	I0917 17:52:04.500136   47940 command_runner.go:130] > # ]
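If a plain-HTTP registry ever had to be allowed (none is in this run), the option takes a list of host:port strings; the address below is hypothetical, and the comment above recommends configuring this in registries.conf instead:

    insecure_registries = [
        # hypothetical registry address
        "192.168.39.1:5000",
    ]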
	I0917 17:52:04.500148   47940 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0917 17:52:04.500159   47940 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0917 17:52:04.500170   47940 command_runner.go:130] > # image_volumes = "mkdir"
	I0917 17:52:04.500180   47940 command_runner.go:130] > # Temporary directory to use for storing big files
	I0917 17:52:04.500190   47940 command_runner.go:130] > # big_files_temporary_dir = ""
	I0917 17:52:04.500200   47940 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0917 17:52:04.500210   47940 command_runner.go:130] > # CNI plugins.
	I0917 17:52:04.500217   47940 command_runner.go:130] > [crio.network]
	I0917 17:52:04.500229   47940 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0917 17:52:04.500241   47940 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0917 17:52:04.500250   47940 command_runner.go:130] > # cni_default_network = ""
	I0917 17:52:04.500262   47940 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0917 17:52:04.500271   47940 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0917 17:52:04.500281   47940 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0917 17:52:04.500290   47940 command_runner.go:130] > # plugin_dirs = [
	I0917 17:52:04.500297   47940 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0917 17:52:04.500305   47940 command_runner.go:130] > # ]
	I0917 17:52:04.500316   47940 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0917 17:52:04.500325   47940 command_runner.go:130] > [crio.metrics]
	I0917 17:52:04.500341   47940 command_runner.go:130] > # Globally enable or disable metrics support.
	I0917 17:52:04.500348   47940 command_runner.go:130] > enable_metrics = true
	I0917 17:52:04.500359   47940 command_runner.go:130] > # Specify enabled metrics collectors.
	I0917 17:52:04.500369   47940 command_runner.go:130] > # Per default all metrics are enabled.
	I0917 17:52:04.500381   47940 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0917 17:52:04.500394   47940 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0917 17:52:04.500406   47940 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0917 17:52:04.500416   47940 command_runner.go:130] > # metrics_collectors = [
	I0917 17:52:04.500431   47940 command_runner.go:130] > # 	"operations",
	I0917 17:52:04.500444   47940 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0917 17:52:04.500455   47940 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0917 17:52:04.500463   47940 command_runner.go:130] > # 	"operations_errors",
	I0917 17:52:04.500474   47940 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0917 17:52:04.500481   47940 command_runner.go:130] > # 	"image_pulls_by_name",
	I0917 17:52:04.500491   47940 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0917 17:52:04.500501   47940 command_runner.go:130] > # 	"image_pulls_failures",
	I0917 17:52:04.500509   47940 command_runner.go:130] > # 	"image_pulls_successes",
	I0917 17:52:04.500519   47940 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0917 17:52:04.500528   47940 command_runner.go:130] > # 	"image_layer_reuse",
	I0917 17:52:04.500538   47940 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0917 17:52:04.500549   47940 command_runner.go:130] > # 	"containers_oom_total",
	I0917 17:52:04.500556   47940 command_runner.go:130] > # 	"containers_oom",
	I0917 17:52:04.500566   47940 command_runner.go:130] > # 	"processes_defunct",
	I0917 17:52:04.500573   47940 command_runner.go:130] > # 	"operations_total",
	I0917 17:52:04.500583   47940 command_runner.go:130] > # 	"operations_latency_seconds",
	I0917 17:52:04.500592   47940 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0917 17:52:04.500602   47940 command_runner.go:130] > # 	"operations_errors_total",
	I0917 17:52:04.500613   47940 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0917 17:52:04.500623   47940 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0917 17:52:04.500631   47940 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0917 17:52:04.500640   47940 command_runner.go:130] > # 	"image_pulls_success_total",
	I0917 17:52:04.500648   47940 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0917 17:52:04.500657   47940 command_runner.go:130] > # 	"containers_oom_count_total",
	I0917 17:52:04.500666   47940 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0917 17:52:04.500677   47940 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0917 17:52:04.500685   47940 command_runner.go:130] > # ]
	I0917 17:52:04.500695   47940 command_runner.go:130] > # The port on which the metrics server will listen.
	I0917 17:52:04.500703   47940 command_runner.go:130] > # metrics_port = 9090
	I0917 17:52:04.500712   47940 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0917 17:52:04.500721   47940 command_runner.go:130] > # metrics_socket = ""
	I0917 17:52:04.500731   47940 command_runner.go:130] > # The certificate for the secure metrics server.
	I0917 17:52:04.500753   47940 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0917 17:52:04.500767   47940 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0917 17:52:04.500778   47940 command_runner.go:130] > # certificate on any modification event.
	I0917 17:52:04.500785   47940 command_runner.go:130] > # metrics_cert = ""
	I0917 17:52:04.500793   47940 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0917 17:52:04.500806   47940 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0917 17:52:04.500816   47940 command_runner.go:130] > # metrics_key = ""
	I0917 17:52:04.500827   47940 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0917 17:52:04.500835   47940 command_runner.go:130] > [crio.tracing]
	I0917 17:52:04.500845   47940 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0917 17:52:04.500854   47940 command_runner.go:130] > # enable_tracing = false
	I0917 17:52:04.500864   47940 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0917 17:52:04.500875   47940 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0917 17:52:04.500888   47940 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0917 17:52:04.500899   47940 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0917 17:52:04.500909   47940 command_runner.go:130] > # CRI-O NRI configuration.
	I0917 17:52:04.500917   47940 command_runner.go:130] > [crio.nri]
	I0917 17:52:04.500925   47940 command_runner.go:130] > # Globally enable or disable NRI.
	I0917 17:52:04.500934   47940 command_runner.go:130] > # enable_nri = false
	I0917 17:52:04.500949   47940 command_runner.go:130] > # NRI socket to listen on.
	I0917 17:52:04.500960   47940 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0917 17:52:04.500968   47940 command_runner.go:130] > # NRI plugin directory to use.
	I0917 17:52:04.500979   47940 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0917 17:52:04.500990   47940 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0917 17:52:04.501001   47940 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0917 17:52:04.501013   47940 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0917 17:52:04.501022   47940 command_runner.go:130] > # nri_disable_connections = false
	I0917 17:52:04.501033   47940 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0917 17:52:04.501042   47940 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0917 17:52:04.501053   47940 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0917 17:52:04.501062   47940 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0917 17:52:04.501075   47940 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0917 17:52:04.501084   47940 command_runner.go:130] > [crio.stats]
	I0917 17:52:04.501100   47940 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0917 17:52:04.501113   47940 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0917 17:52:04.501122   47940 command_runner.go:130] > # stats_collection_period = 0
	I0917 17:52:04.501159   47940 command_runner.go:130] ! time="2024-09-17 17:52:04.448275633Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0917 17:52:04.501180   47940 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
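The dumped crio.conf above shows metrics enabled (enable_metrics = true) on the default metrics_port of 9090. Purely as an illustration, and not part of the test run, a stdlib-only Go sketch that scrapes that endpoint and filters for the prefixed collector names could look like the following; the address and its reachability from the caller are assumptions:

```go
// metrics_probe.go - a minimal sketch (not part of minikube or the test run)
// that pulls CRI-O's Prometheus metrics, assuming enable_metrics = true and
// the default metrics_port = 9090 shown in the config dump above.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// The address is an assumption: it only works where CRI-O's metrics
	// listener is reachable (e.g. from inside the minikube node).
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Collectors such as "operations" are exported with the "crio_" /
		// "container_runtime_" prefixes mentioned in the config comments.
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}
```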
	I0917 17:52:04.501303   47940 cni.go:84] Creating CNI manager for ""
	I0917 17:52:04.501319   47940 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 17:52:04.501355   47940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:52:04.501387   47940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-178778 NodeName:multinode-178778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:52:04.501553   47940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-178778"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
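The multi-document payload above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what minikube ships to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sketch only, not minikube code, the documents' kinds can be enumerated with stdlib Go:

```go
// kubeadm_kinds.go - a minimal sketch (not from the minikube source) that
// splits the generated multi-document kubeadm YAML and reports each "kind",
// assuming the file path used in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path is taken from the log; adjust as needed.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				// Expected output: InitConfiguration, ClusterConfiguration,
				// KubeletConfiguration, KubeProxyConfiguration.
				fmt.Println(strings.TrimSpace(line))
			}
		}
	}
}
```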
	I0917 17:52:04.501628   47940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:52:04.514520   47940 command_runner.go:130] > kubeadm
	I0917 17:52:04.514543   47940 command_runner.go:130] > kubectl
	I0917 17:52:04.514548   47940 command_runner.go:130] > kubelet
	I0917 17:52:04.514573   47940 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:52:04.514639   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 17:52:04.525622   47940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0917 17:52:04.544152   47940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:52:04.562825   47940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0917 17:52:04.581283   47940 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0917 17:52:04.585532   47940 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
	I0917 17:52:04.585687   47940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:52:04.729835   47940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:52:04.745911   47940 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778 for IP: 192.168.39.35
	I0917 17:52:04.745935   47940 certs.go:194] generating shared ca certs ...
	I0917 17:52:04.745950   47940 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:52:04.746105   47940 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:52:04.746148   47940 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:52:04.746164   47940 certs.go:256] generating profile certs ...
	I0917 17:52:04.746239   47940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/client.key
	I0917 17:52:04.746293   47940 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key.b26bef04
	I0917 17:52:04.746332   47940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key
	I0917 17:52:04.746343   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:52:04.746355   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:52:04.746367   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:52:04.746378   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:52:04.746390   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:52:04.746410   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:52:04.746425   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:52:04.746436   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:52:04.746487   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:52:04.746516   47940 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:52:04.746522   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:52:04.746541   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:52:04.746561   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:52:04.746581   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:52:04.746623   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:52:04.746649   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:52:04.746672   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:04.746683   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:52:04.747235   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:52:04.774491   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:52:04.800965   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:52:04.828637   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:52:04.855141   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 17:52:04.881673   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:52:04.908744   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:52:04.936313   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 17:52:04.963202   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:52:04.989620   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:52:05.016366   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:52:05.043395   47940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:52:05.061825   47940 ssh_runner.go:195] Run: openssl version
	I0917 17:52:05.068566   47940 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 17:52:05.068674   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:52:05.080678   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085309   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085333   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085386   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.091722   47940 command_runner.go:130] > 51391683
	I0917 17:52:05.091797   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:52:05.101992   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:52:05.113945   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118711   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118762   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118813   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.124849   47940 command_runner.go:130] > 3ec20f2e
	I0917 17:52:05.124961   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:52:05.135236   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:52:05.147407   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152316   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152355   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152413   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.158848   47940 command_runner.go:130] > b5213941
	I0917 17:52:05.158935   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
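The three blocks above repeat the same pattern for each CA bundle: hash the PEM with openssl x509 -hash -noout and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's lookup-by-hash finds it. A minimal Go sketch of that step, assuming the openssl binary is available and the caller may write to /etc/ssl/certs (this is an illustration, not minikube's implementation):

```go
// cert_hash_link.go - a minimal sketch (not minikube code) of the
// hash-and-symlink step shown above: compute the OpenSSL subject hash of a
// PEM file and link /etc/ssl/certs/<hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the run above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println(link, "->", pem)
}
```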
	I0917 17:52:05.169028   47940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:52:05.173737   47940 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:52:05.173765   47940 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0917 17:52:05.173770   47940 command_runner.go:130] > Device: 253,1	Inode: 5242920     Links: 1
	I0917 17:52:05.173777   47940 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:52:05.173794   47940 command_runner.go:130] > Access: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173799   47940 command_runner.go:130] > Modify: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173804   47940 command_runner.go:130] > Change: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173809   47940 command_runner.go:130] >  Birth: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173865   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 17:52:05.179855   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.179925   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 17:52:05.185789   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.185932   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 17:52:05.191963   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.192042   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 17:52:05.198201   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.198356   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 17:52:05.204543   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.204681   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 17:52:05.211194   47940 command_runner.go:130] > Certificate will not expire
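Each -checkend 86400 probe above asks whether a certificate expires within the next 24 hours. The same check can be done without shelling out; the sketch below is a stdlib-only illustration, not minikube code, and the path is taken from one of the probes above:

```go
// checkend.go - a stdlib-only sketch (not minikube code) of the
// "-checkend 86400" probes above: report whether a certificate expires
// within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is illustrative; the run above checks apiserver, etcd and
	// front-proxy client certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire") // matches the log output above
	}
}
```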
	I0917 17:52:05.211306   47940 kubeadm.go:392] StartCluster: {Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:52:05.211452   47940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:52:05.211519   47940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:52:05.250668   47940 command_runner.go:130] > 5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866
	I0917 17:52:05.250692   47940 command_runner.go:130] > bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2
	I0917 17:52:05.250698   47940 command_runner.go:130] > d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9
	I0917 17:52:05.250705   47940 command_runner.go:130] > 1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad
	I0917 17:52:05.250710   47940 command_runner.go:130] > 2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e
	I0917 17:52:05.250719   47940 command_runner.go:130] > 8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b
	I0917 17:52:05.250732   47940 command_runner.go:130] > 4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402
	I0917 17:52:05.250760   47940 command_runner.go:130] > b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc
	I0917 17:52:05.250788   47940 cri.go:89] found id: "5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866"
	I0917 17:52:05.250797   47940 cri.go:89] found id: "bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2"
	I0917 17:52:05.250801   47940 cri.go:89] found id: "d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9"
	I0917 17:52:05.250804   47940 cri.go:89] found id: "1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad"
	I0917 17:52:05.250807   47940 cri.go:89] found id: "2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e"
	I0917 17:52:05.250810   47940 cri.go:89] found id: "8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b"
	I0917 17:52:05.250813   47940 cri.go:89] found id: "4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402"
	I0917 17:52:05.250820   47940 cri.go:89] found id: "b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc"
	I0917 17:52:05.250824   47940 cri.go:89] found id: ""
	I0917 17:52:05.250880   47940 ssh_runner.go:195] Run: sudo runc list -f json
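StartCluster begins by enumerating existing kube-system containers with crictl, as shown above. The sketch below, which is illustrative rather than minikube's own code, runs the same query and prints the IDs; it assumes crictl is installed and the caller has sufficient privileges:

```go
// list_kube_system.go - a minimal sketch (not minikube code) of the container
// enumeration above: run the same crictl query and print one container ID per
// line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same command shown in the log (minus the sudo wrapper).
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // mirrors the cri.go:89 lines above
	}
}
```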
	
	
	==> CRI-O <==
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.685515314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595630685491959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a4d135b-ccdd-4461-853a-461018a70f1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.686132498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8451578-d61b-48c3-8578-a879efddb2ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.686211014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8451578-d61b-48c3-8578-a879efddb2ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.686585409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8451578-d61b-48c3-8578-a879efddb2ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.739519535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d5a671d-f7b0-48a6-b2dd-eef33c70718f name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.739619697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d5a671d-f7b0-48a6-b2dd-eef33c70718f name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.740822575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d91b4f0c-f3fb-4aa3-838a-f073afa8f7ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.741342720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595630741314773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d91b4f0c-f3fb-4aa3-838a-f073afa8f7ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.741863343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3d48b1c-85ae-4e66-8bfc-91383baa5cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.741950489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3d48b1c-85ae-4e66-8bfc-91383baa5cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.742377597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3d48b1c-85ae-4e66-8bfc-91383baa5cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.788191296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31584417-1e04-4184-98dc-2da8003c2f47 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.788282468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31584417-1e04-4184-98dc-2da8003c2f47 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.789843101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2154a815-1ec9-445d-b872-654b3f20f9d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.790405120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595630790378532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2154a815-1ec9-445d-b872-654b3f20f9d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.791144142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ebb293-1b43-45f4-89ab-cbdf6ab0d1b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.791207926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ebb293-1b43-45f4-89ab-cbdf6ab0d1b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.792711707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47ebb293-1b43-45f4-89ab-cbdf6ab0d1b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.842577005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb699636-7718-4275-9f99-06a51ad45d7c name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.842658920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb699636-7718-4275-9f99-06a51ad45d7c name=/runtime.v1.RuntimeService/Version
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.844226771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f6cd564-653c-4a27-af8e-abe1cad719a4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.844634484Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595630844608940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f6cd564-653c-4a27-af8e-abe1cad719a4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.845329594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5ea44fe-211a-467a-8bc1-36dbbe8c3cae name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.845385906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5ea44fe-211a-467a-8bc1-36dbbe8c3cae name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:53:50 multinode-178778 crio[2738]: time="2024-09-17 17:53:50.845737969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5ea44fe-211a-467a-8bc1-36dbbe8c3cae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0405fbdc7bc87       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   0aba6a8ac13af       busybox-7dff88458-dh729
	c7747cc5a5825       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   4b6c0d09ebedf       kindnet-jpqbk
	86c20e143c46a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   ccf0cada1da70       coredns-7c65d6cfc9-6qp52
	6e6b24db391cb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   b284dd0c0aaa1       kube-proxy-xgjrq
	a32d8d953255d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   64eb283025385       storage-provisioner
	f4d9d8cddcad9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   c6e67650f8fe9       kube-controller-manager-multinode-178778
	e0cc24c236795       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   d8ca75c103a34       kube-scheduler-multinode-178778
	1b6f4978954af       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   9f322ac6a5eed       kube-apiserver-multinode-178778
	c84521880b71b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   d7b4d4533a9fc       etcd-multinode-178778
	c935cf93b3484       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   b09fedf3bddf8       busybox-7dff88458-dh729
	5a42732ed4168       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   7ea2370d20b04       coredns-7c65d6cfc9-6qp52
	bb9c7ffc975f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   7ca00b7e670bd       storage-provisioner
	d92c1bfd527d0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   bcd38cf96e3f5       kindnet-jpqbk
	1b25b58f20590       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   52d4b2fe8a56e       kube-proxy-xgjrq
	2dec6c2647270       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   5f38a03a0f184       kube-scheduler-multinode-178778
	8d7e1ab5d7a86       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   8618dedde106d       kube-controller-manager-multinode-178778
	4d0d3a5d8108f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   ef58d0b7c4465       etcd-multinode-178778
	b632cb69ae054       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   e5ecbef0bd37b       kube-apiserver-multinode-178778
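The CRI-level listing above can be re-checked on the primary node itself; a minimal sketch, assuming the profile name matches the node name shown and that crictl is available in the minikube guest (the container IDs would be taken from the first column):

  minikube -p multinode-178778 ssh -- sudo crictl ps -a
  minikube -p multinode-178778 ssh -- sudo crictl logs <container-id>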
	
	
	==> coredns [5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866] <==
	[INFO] 10.244.1.2:34177 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763538s
	[INFO] 10.244.1.2:49434 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145433s
	[INFO] 10.244.1.2:51746 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115236s
	[INFO] 10.244.1.2:49965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001213737s
	[INFO] 10.244.1.2:56314 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066309s
	[INFO] 10.244.1.2:34888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093495s
	[INFO] 10.244.1.2:54349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094506s
	[INFO] 10.244.0.3:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113623s
	[INFO] 10.244.0.3:52920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138575s
	[INFO] 10.244.0.3:43285 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075872s
	[INFO] 10.244.0.3:46058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078939s
	[INFO] 10.244.1.2:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164364s
	[INFO] 10.244.1.2:47461 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183226s
	[INFO] 10.244.1.2:48640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130569s
	[INFO] 10.244.1.2:49432 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144736s
	[INFO] 10.244.0.3:52617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125296s
	[INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169824s
	[INFO] 10.244.0.3:38825 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012174s
	[INFO] 10.244.0.3:36682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106208s
	[INFO] 10.244.1.2:45018 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147648s
	[INFO] 10.244.1.2:46383 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164519s
	[INFO] 10.244.1.2:43690 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111118s
	[INFO] 10.244.1.2:39530 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011877s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
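The NXDOMAIN answers above for kubernetes.default and kubernetes.default.default.svc.cluster.local, alongside NOERROR for kubernetes.default.svc.cluster.local, are consistent with the usual ndots:5 search-path expansion in pod resolv.conf rather than a resolution failure. A rough way to confirm from one of the busybox pods, assuming the kubeconfig context carries the profile name as elsewhere in this report:

  kubectl --context multinode-178778 exec busybox-7dff88458-dh729 -- cat /etc/resolv.conf
  kubectl --context multinode-178778 exec busybox-7dff88458-dh729 -- nslookup kubernetes.default.svc.cluster.local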
	
	
	==> coredns [86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48700 - 42719 "HINFO IN 6621620421430811849.8086069219133186416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01679534s
	
	
	==> describe nodes <==
	Name:               multinode-178778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=multinode-178778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_45_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:45:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178778
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:53:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    multinode-178778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c5753240dc445929e857fe9cb9def72
	  System UUID:                7c575324-0dc4-4592-9e85-7fe9cb9def72
	  Boot ID:                    a8f276e5-ab31-48df-b3de-34e52584cbf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dh729                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 coredns-7c65d6cfc9-6qp52                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-178778                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-jpqbk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-178778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-178778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-xgjrq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-178778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m34s (x8 over 8m34s)  kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x8 over 8m34s)  kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x7 over 8m34s)  kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                  kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m28s                  kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s                  kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-178778 event: Registered Node multinode-178778 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-178778 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-178778 event: Registered Node multinode-178778 in Controller
	
	
	Name:               multinode-178778-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178778-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=multinode-178778
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_52_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:52:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178778-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:53:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:52:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:52:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:52:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:53:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    multinode-178778-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4ca9ca4ae3642c18aba8226124e3ebf
	  System UUID:                f4ca9ca4-ae36-42c1-8aba-8226124e3ebf
	  Boot ID:                    232398c8-2a88-4761-9b3b-5221ce050f77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nw788    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-2qnbk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-c8cnr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m30s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-178778-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-178778-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-178778-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-178778-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-178778-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-178778-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-178778-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-178778-m02 status is now: NodeReady
	
	
	Name:               multinode-178778-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178778-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=multinode-178778
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_53_30_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:53:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178778-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:53:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:53:47 +0000   Tue, 17 Sep 2024 17:53:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:53:47 +0000   Tue, 17 Sep 2024 17:53:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:53:47 +0000   Tue, 17 Sep 2024 17:53:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:53:47 +0000   Tue, 17 Sep 2024 17:53:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    multinode-178778-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 35bcbafe566c497c87f31444d60a943f
	  System UUID:                35bcbafe-566c-497c-87f3-1444d60a943f
	  Boot ID:                    ed240b21-88a6-47fa-b64b-014f3a53315a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tvvwv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-m8z6x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet          Node multinode-178778-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m22s                  kubelet          Node multinode-178778-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-178778-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m34s                  kubelet          Node multinode-178778-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-178778-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-178778-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-178778-m03 event: Registered Node multinode-178778-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-178778-m03 status is now: NodeReady
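The three node descriptions above can be refreshed directly against the cluster; a minimal sketch, again assuming the kubeconfig context matches the profile name:

  kubectl --context multinode-178778 get nodes -o wide
  kubectl --context multinode-178778 describe node multinode-178778-m03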
	
	
	==> dmesg <==
	[  +0.066262] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.164050] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.148836] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.282267] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.210236] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.165058] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.060981] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498598] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.091673] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.718557] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +1.026682] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.366049] kauditd_printk_skb: 41 callbacks suppressed
	[Sep17 17:46] kauditd_printk_skb: 14 callbacks suppressed
	[Sep17 17:51] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[  +0.156396] systemd-fstab-generator[2673]: Ignoring "noauto" option for root device
	[  +0.185248] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[  +0.153418] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.298054] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[Sep17 17:52] systemd-fstab-generator[2824]: Ignoring "noauto" option for root device
	[  +0.086195] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.073423] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +5.664636] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.717026] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.295449] systemd-fstab-generator[3795]: Ignoring "noauto" option for root device
	[ +19.812877] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402] <==
	{"level":"info","ts":"2024-09-17T17:45:19.610472Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.613212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-178778 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:45:19.613423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:45:19.613741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:45:19.614069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:45:19.614105Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:45:19.614729Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:45:19.621091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621237Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621811Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:45:19.622590Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:45:19.624234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	{"level":"info","ts":"2024-09-17T17:46:15.136060Z","caller":"traceutil/trace.go:171","msg":"trace[1538620714] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"100.292864ms","start":"2024-09-17T17:46:15.035734Z","end":"2024-09-17T17:46:15.136026Z","steps":["trace[1538620714] 'process raft request'  (duration: 40.580853ms)","trace[1538620714] 'compare'  (duration: 59.562208ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T17:46:15.360968Z","caller":"traceutil/trace.go:171","msg":"trace[948571904] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"157.428375ms","start":"2024-09-17T17:46:15.203520Z","end":"2024-09-17T17:46:15.360948Z","steps":["trace[948571904] 'process raft request'  (duration: 152.690087ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:50:23.814397Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T17:50:23.814511Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-178778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	{"level":"warn","ts":"2024-09-17T17:50:23.814653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.814759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.914368Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.914438Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:50:23.914520Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"732232f81d76e930","current-leader-member-id":"732232f81d76e930"}
	{"level":"info","ts":"2024-09-17T17:50:23.917488Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:50:23.917703Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:50:23.917741Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-178778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	
	
	==> etcd [c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3] <==
	{"level":"info","ts":"2024-09-17T17:52:08.099899Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","added-peer-id":"732232f81d76e930","added-peer-peer-urls":["https://192.168.39.35:2380"]}
	{"level":"info","ts":"2024-09-17T17:52:08.100092Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:52:08.100140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:52:08.102238Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:08.103955Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T17:52:08.111195Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"732232f81d76e930","initial-advertise-peer-urls":["https://192.168.39.35:2380"],"listen-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:52:08.111686Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:52:08.112212Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:52:08.113009Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:52:09.730623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgPreVoteResp from 732232f81d76e930 at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgVoteResp from 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became leader at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 732232f81d76e930 elected leader 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.733458Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-178778 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:52:09.733671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:52:09.733750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:52:09.733782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:52:09.733800Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:52:09.735082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:09.736070Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:52:09.735084Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:09.736931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	
	
	==> kernel <==
	 17:53:51 up 9 min,  0 users,  load average: 0.13, 0.14, 0.09
	Linux multinode-178778 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3] <==
	I0917 17:53:03.844367       1 main.go:299] handling current node
	I0917 17:53:13.844271       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:53:13.844397       1 main.go:299] handling current node
	I0917 17:53:13.844426       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:53:13.844444       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:53:13.844592       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:53:13.844614       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:53:23.844232       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:53:23.844491       1 main.go:299] handling current node
	I0917 17:53:23.844605       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:53:23.844633       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:53:23.844970       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:53:23.845090       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:53:33.847456       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:53:33.847757       1 main.go:299] handling current node
	I0917 17:53:33.847816       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:53:33.847843       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:53:33.848113       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:53:33.848154       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.2.0/24] 
	I0917 17:53:43.845364       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:53:43.845471       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.2.0/24] 
	I0917 17:53:43.845785       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:53:43.845822       1 main.go:299] handling current node
	I0917 17:53:43.845896       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:53:43.845903       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9] <==
	I0917 17:49:40.849687       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:49:50.852522       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:49:50.852689       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:49:50.852863       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:49:50.852892       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:49:50.852963       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:49:50.853072       1 main.go:299] handling current node
	I0917 17:50:00.854210       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:00.854288       1 main.go:299] handling current node
	I0917 17:50:00.854303       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:00.854309       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:50:00.854487       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:00.854515       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:10.848681       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:10.848751       1 main.go:299] handling current node
	I0917 17:50:10.848769       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:10.848774       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:50:10.849102       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:10.849127       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:20.848574       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:20.848701       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:20.848906       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:20.848954       1 main.go:299] handling current node
	I0917 17:50:20.849075       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:20.849097       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db] <==
	I0917 17:52:11.110301       1 policy_source.go:224] refreshing policies
	I0917 17:52:11.120092       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:52:11.125366       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:52:11.126345       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:52:11.135361       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:52:11.135479       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:52:11.135504       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:52:11.135604       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:52:11.137598       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 17:52:11.168143       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 17:52:11.172326       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:52:11.172421       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:52:11.172429       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:52:11.172434       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:52:11.172439       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:52:11.193925       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:52:11.200060       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:52:12.032576       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 17:52:13.476072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:52:13.671899       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 17:52:13.683408       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 17:52:13.780369       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 17:52:13.793645       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 17:52:14.531518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:52:14.823179       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc] <==
	I0917 17:45:23.874818       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:45:28.713506       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 17:45:28.880432       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0917 17:46:39.713904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48720: use of closed network connection
	E0917 17:46:39.886268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48734: use of closed network connection
	E0917 17:46:40.067289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48750: use of closed network connection
	E0917 17:46:40.255709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48764: use of closed network connection
	E0917 17:46:40.431787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48784: use of closed network connection
	E0917 17:46:40.602437       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48798: use of closed network connection
	E0917 17:46:40.901391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48830: use of closed network connection
	E0917 17:46:41.077891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48840: use of closed network connection
	E0917 17:46:41.257588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48852: use of closed network connection
	E0917 17:46:41.424849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48870: use of closed network connection
	I0917 17:50:23.813327       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0917 17:50:23.824425       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.830692       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.830913       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.835500       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836512       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836602       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836970       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.839127       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.842956       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.843506       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.843595       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b] <==
	I0917 17:47:58.339720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:58.339820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:47:59.451629       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178778-m03\" does not exist"
	I0917 17:47:59.452538       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:47:59.464212       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-178778-m03" podCIDRs=["10.244.4.0/24"]
	I0917 17:47:59.464267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.464297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.474555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.913929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:00.279782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:03.250225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:09.711665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:17.646732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:17.646888       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m03"
	I0917 17:48:17.661759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:18.184449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:58.205507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:48:58.205577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m03"
	I0917 17:48:58.223326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:48:58.279433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.194791ms"
	I0917 17:48:58.279592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.657µs"
	I0917 17:49:03.273831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:49:03.292801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:49:03.355429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:49:13.435738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	
	
	==> kube-controller-manager [f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4] <==
	I0917 17:53:09.135453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:53:09.142958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.183µs"
	I0917 17:53:09.169082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="152.18µs"
	I0917 17:53:09.534141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:53:12.581250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.670047ms"
	I0917 17:53:12.581544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.366µs"
	I0917 17:53:21.615075       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:53:28.260755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:28.276640       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:28.516154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:28.516520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:53:29.655450       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178778-m03\" does not exist"
	I0917 17:53:29.655663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:53:29.670276       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-178778-m03" podCIDRs=["10.244.2.0/24"]
	I0917 17:53:29.670403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:29.670929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:29.679718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:30.077838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:30.444915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:34.563441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:39.938473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:47.733229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:47.733354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:53:47.749837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:49.556728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	
	
	==> kube-proxy [1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:45:29.604115       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:45:29.624029       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0917 17:45:29.624209       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:45:29.686269       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:45:29.686323       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:45:29.686387       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:45:29.689082       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:45:29.689473       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:45:29.689500       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:45:29.691114       1 config.go:199] "Starting service config controller"
	I0917 17:45:29.691161       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:45:29.691191       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:45:29.691196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:45:29.693876       1 config.go:328] "Starting node config controller"
	I0917 17:45:29.693908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:45:29.791568       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:45:29.791639       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:45:29.794650       1 shared_informer.go:320] Caches are synced for node config
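Editor's note: the "Error cleaning up nftables rules" entries above (repeated in the second kube-proxy instance below) show kube-proxy feeding commands such as "add table ip kube-proxy" / "add table ip6 kube-proxy" to nft via /dev/stdin and the kernel answering "Operation not supported"; the proxier carries on anyway and, as the following lines show, comes up in iptables mode. Below is a minimal standalone sketch, not kube-proxy's own code, for reproducing that probe on a node; it assumes an nft binary is on PATH and uses a scratch table name.

// nft_probe.go: a minimal sketch (not kube-proxy code) that mimics the failing
// cleanup call above: pipe nftables commands to nft via /dev/stdin and inspect
// the error. Assumes the nft binary is installed; the table name is a scratch name.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func probeNftables(family string) error {
	// kube-proxy's cleanup feeds "add table ip kube-proxy" (and the ip6 variant)
	// via /dev/stdin; on this VM the kernel rejects it with "Operation not supported".
	script := fmt.Sprintf("add table %s probe_scratch\ndelete table %s probe_scratch\n", family, family)
	cmd := exec.Command("nft", "-f", "/dev/stdin")
	cmd.Stdin = strings.NewReader(script)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("nft (%s) failed: %v: %s", family, err, stderr.String())
	}
	return nil
}

func main() {
	for _, family := range []string{"ip", "ip6"} {
		if err := probeNftables(family); err != nil {
			fmt.Println(err) // mirrors the "could not run nftables command" errors above
		} else {
			fmt.Printf("nftables %s family usable\n", family)
		}
	}
}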
	
	
	==> kube-proxy [6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:52:13.253126       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:52:13.278535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0917 17:52:13.278672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:52:13.409705       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:52:13.409820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:52:13.409862       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:52:13.418230       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:52:13.418601       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:52:13.418629       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:52:13.422681       1 config.go:328] "Starting node config controller"
	I0917 17:52:13.422772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:52:13.422887       1 config.go:199] "Starting service config controller"
	I0917 17:52:13.422943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:52:13.423031       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:52:13.423051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:52:13.523426       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:52:13.523560       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:52:13.523666       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e] <==
	E0917 17:45:22.106240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.135512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:45:22.135607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.158913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:45:22.158967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.173931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:45:22.174027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.334363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:45:22.334513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.343318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:45:22.343371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.378497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:45:22.378632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.391028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:45:22.391179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.472897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:45:22.473067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.517325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 17:45:22.517440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.543782       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 17:45:22.543890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 17:45:22.546448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:45:22.546497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 17:45:25.387053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:50:23.818061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24] <==
	I0917 17:52:09.112927       1 serving.go:386] Generated self-signed cert in-memory
	W0917 17:52:11.080936       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 17:52:11.081036       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 17:52:11.081049       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 17:52:11.081061       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 17:52:11.119767       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 17:52:11.119827       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:52:11.122633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 17:52:11.122735       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:52:11.122807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 17:52:11.122908       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:52:11.223701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:52:17 multinode-178778 kubelet[2954]: E0917 17:52:17.087480    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595537086865951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:19 multinode-178778 kubelet[2954]: I0917 17:52:19.100618    2954 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 17 17:52:27 multinode-178778 kubelet[2954]: E0917 17:52:27.088610    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595547088360590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:27 multinode-178778 kubelet[2954]: E0917 17:52:27.088634    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595547088360590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:37 multinode-178778 kubelet[2954]: E0917 17:52:37.097212    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595557096386045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:37 multinode-178778 kubelet[2954]: E0917 17:52:37.097539    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595557096386045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:47 multinode-178778 kubelet[2954]: E0917 17:52:47.099669    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595567099335407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:47 multinode-178778 kubelet[2954]: E0917 17:52:47.100036    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595567099335407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:57 multinode-178778 kubelet[2954]: E0917 17:52:57.107575    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595577106740673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:52:57 multinode-178778 kubelet[2954]: E0917 17:52:57.107781    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595577106740673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:07 multinode-178778 kubelet[2954]: E0917 17:53:07.046651    2954 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:53:07 multinode-178778 kubelet[2954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:53:07 multinode-178778 kubelet[2954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:53:07 multinode-178778 kubelet[2954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:53:07 multinode-178778 kubelet[2954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:53:07 multinode-178778 kubelet[2954]: E0917 17:53:07.109536    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595587109188966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:07 multinode-178778 kubelet[2954]: E0917 17:53:07.109589    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595587109188966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:17 multinode-178778 kubelet[2954]: E0917 17:53:17.111848    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595597111276748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:17 multinode-178778 kubelet[2954]: E0917 17:53:17.111900    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595597111276748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:27 multinode-178778 kubelet[2954]: E0917 17:53:27.113522    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595607112932953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:27 multinode-178778 kubelet[2954]: E0917 17:53:27.113894    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595607112932953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:37 multinode-178778 kubelet[2954]: E0917 17:53:37.116367    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595617115861844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:37 multinode-178778 kubelet[2954]: E0917 17:53:37.116708    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595617115861844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:47 multinode-178778 kubelet[2954]: E0917 17:53:47.118584    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595627118226587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:53:47 multinode-178778 kubelet[2954]: E0917 17:53:47.118959    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595627118226587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
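Editor's note: the eviction-manager errors above repeat every ten seconds because the CRI ImageFsInfo response quoted in them reports image-filesystem usage (overlay-images) but an empty ContainerFilesystems list, so kubelet cannot determine HasDedicatedImageFs. A small diagnostic sketch, not kubelet code, for inspecting the same response on the node via crictl; it assumes crictl is installed on the node and can reach the CRI-O socket (typically as root).

// imagefs_check.go: shells out to crictl to dump the runtime's image filesystem
// info, the same data the kubelet eviction manager is failing on above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "crictl imagefsinfo" asks the container runtime for its ImageFsInfo data;
	// in the failing runs above the container-filesystem section comes back empty.
	out, err := exec.Command("crictl", "imagefsinfo").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl imagefsinfo failed: %v\n%s", err, out)
	}
	fmt.Println(string(out))
}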
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:53:50.407611   49087 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
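Editor's note: the "bufio.Scanner: token too long" failure above is the stock Go error for a single line longer than the scanner's default 64 KiB token limit; lastStart.txt contains very long one-line entries (the cluster-config dumps quoted elsewhere in this report are the same shape). A minimal sketch, with an illustrative file path, of reading such a file with an enlarged scanner buffer:

// long_lines.go: reads a log file whose lines can exceed bufio.Scanner's default
// 64 KiB token limit (bufio.MaxScanTokenSize) by giving the scanner a larger buffer.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative stand-in for .minikube/logs/lastStart.txt
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line cap to 10 MiB; without this, a single oversized line
	// aborts the scan with "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}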
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-178778 -n multinode-178778
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-178778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.90s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 stop
E0917 17:54:28.049598   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178778 stop: exit status 82 (2m0.470970143s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-178778-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-178778 stop": exit status 82
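Editor's note: exit status 82 (GUEST_STOP_TIMEOUT) means minikube requested the stop and then kept polling the VM's state until its deadline expired, with the machine still reported as "Running". A minimal sketch of that stop-then-poll pattern; stopNode and nodeState are hypothetical stand-ins, not minikube's actual driver API.

// stop_poll.go: illustrates the stop-with-deadline pattern behind GUEST_STOP_TIMEOUT.
// stopNode and nodeState are hypothetical placeholders for the driver calls.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func stopNode(name string) error   { fmt.Println("requesting stop of", name); return nil }
func nodeState(name string) string { return "Running" } // pretend the guest never shuts down

func stopWithTimeout(ctx context.Context, name string) error {
	if err := stopNode(name); err != nil {
		return err
	}
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// The situation reported above: deadline hit, state still "Running".
			return errors.New("unable to stop vm, current state \"" + nodeState(name) + "\"")
		case <-ticker.C:
			if nodeState(name) == "Stopped" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := stopWithTimeout(ctx, "multinode-178778-m02"); err != nil {
		fmt.Println("stop failed:", err)
	}
}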
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178778 status: exit status 3 (18.894242163s)

                                                
                                                
-- stdout --
	multinode-178778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178778-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:56:14.061538   49731 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	E0917 17:56:14.061581   49731 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-178778 status" : exit status 3
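Editor's note: both status errors above reduce to the worker's SSH endpoint being unreachable ("no route to host" to 192.168.39.118:22), which is why multinode-178778-m02 is reported as host: Error / kubelet: Nonexistent. A minimal reachability sketch, not minikube code, with the address taken from the output above:

// ssh_reach.go: checks whether the worker's SSH port answers a plain TCP dial,
// the step that the status command is failing on above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.118:22" // multinode-178778-m02, from the status output above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("node unreachable: %v\n", err) // matches "connect: no route to host" above
		return
	}
	conn.Close()
	fmt.Println("node ssh port reachable")
}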
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-178778 -n multinode-178778
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-178778 logs -n 25: (1.568999724s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778:/home/docker/cp-test_multinode-178778-m02_multinode-178778.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778 sudo cat                                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m02_multinode-178778.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03:/home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778-m03 sudo cat                                   | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp testdata/cp-test.txt                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778:/home/docker/cp-test_multinode-178778-m03_multinode-178778.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778 sudo cat                                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m03_multinode-178778.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt                       | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m02:/home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n                                                                 | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | multinode-178778-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-178778 ssh -n multinode-178778-m02 sudo cat                                   | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-178778 node stop m03                                                          | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:47 UTC |
	| node    | multinode-178778 node start                                                             | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:47 UTC | 17 Sep 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-178778                                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:48 UTC |                     |
	| stop    | -p multinode-178778                                                                     | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:48 UTC |                     |
	| start   | -p multinode-178778                                                                     | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:50 UTC | 17 Sep 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-178778                                                                | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:53 UTC |                     |
	| node    | multinode-178778 node delete                                                            | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:53 UTC | 17 Sep 24 17:53 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-178778 stop                                                                   | multinode-178778 | jenkins | v1.34.0 | 17 Sep 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:50:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:50:22.407405   47940 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:50:22.407818   47940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:50:22.407828   47940 out.go:358] Setting ErrFile to fd 2...
	I0917 17:50:22.407835   47940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:50:22.408119   47940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:50:22.408728   47940 out.go:352] Setting JSON to false
	I0917 17:50:22.409750   47940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5537,"bootTime":1726589885,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:50:22.409851   47940 start.go:139] virtualization: kvm guest
	I0917 17:50:22.412162   47940 out.go:177] * [multinode-178778] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:50:22.413560   47940 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:50:22.413561   47940 notify.go:220] Checking for updates...
	I0917 17:50:22.414900   47940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:50:22.416235   47940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:50:22.417819   47940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:50:22.419325   47940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:50:22.420562   47940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:50:22.422347   47940 config.go:182] Loaded profile config "multinode-178778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:50:22.422477   47940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:50:22.423141   47940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:50:22.423201   47940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:50:22.440370   47940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0917 17:50:22.440937   47940 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:50:22.441574   47940 main.go:141] libmachine: Using API Version  1
	I0917 17:50:22.441596   47940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:50:22.441962   47940 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:50:22.442181   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.479596   47940 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 17:50:22.480923   47940 start.go:297] selected driver: kvm2
	I0917 17:50:22.480947   47940 start.go:901] validating driver "kvm2" against &{Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:50:22.481107   47940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:50:22.481483   47940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:50:22.481577   47940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 17:50:22.497866   47940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 17:50:22.498603   47940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:50:22.498652   47940 cni.go:84] Creating CNI manager for ""
	I0917 17:50:22.498706   47940 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 17:50:22.498775   47940 start.go:340] cluster config:
	{Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:50:22.498963   47940 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:50:22.500937   47940 out.go:177] * Starting "multinode-178778" primary control-plane node in "multinode-178778" cluster
	I0917 17:50:22.502213   47940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:50:22.502256   47940 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 17:50:22.502263   47940 cache.go:56] Caching tarball of preloaded images
	I0917 17:50:22.502380   47940 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 17:50:22.502395   47940 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 17:50:22.502518   47940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/config.json ...
	I0917 17:50:22.502707   47940 start.go:360] acquireMachinesLock for multinode-178778: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 17:50:22.502753   47940 start.go:364] duration metric: took 25.965µs to acquireMachinesLock for "multinode-178778"
	I0917 17:50:22.502772   47940 start.go:96] Skipping create...Using existing machine configuration
	I0917 17:50:22.502780   47940 fix.go:54] fixHost starting: 
	I0917 17:50:22.503030   47940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:50:22.503065   47940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:50:22.517839   47940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I0917 17:50:22.518243   47940 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:50:22.518737   47940 main.go:141] libmachine: Using API Version  1
	I0917 17:50:22.518759   47940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:50:22.519056   47940 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:50:22.519217   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.519397   47940 main.go:141] libmachine: (multinode-178778) Calling .GetState
	I0917 17:50:22.520935   47940 fix.go:112] recreateIfNeeded on multinode-178778: state=Running err=<nil>
	W0917 17:50:22.520956   47940 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 17:50:22.523004   47940 out.go:177] * Updating the running kvm2 "multinode-178778" VM ...
	I0917 17:50:22.524521   47940 machine.go:93] provisionDockerMachine start ...
	I0917 17:50:22.524565   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:50:22.524803   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.527466   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.527945   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.527965   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.528148   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.528321   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.528475   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.528598   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.528766   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.528961   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.528976   47940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:50:22.646608   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-178778
	
	I0917 17:50:22.646633   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.646910   47940 buildroot.go:166] provisioning hostname "multinode-178778"
	I0917 17:50:22.646935   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.647114   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.649595   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.650098   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.650124   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.650313   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.650490   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.650654   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.650788   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.650920   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.651122   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.651139   47940 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-178778 && echo "multinode-178778" | sudo tee /etc/hostname
	I0917 17:50:22.785094   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-178778
	
	I0917 17:50:22.785129   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.787581   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.787915   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.787948   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.788099   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:22.788284   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.788480   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:22.788609   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:22.788759   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:22.788971   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:22.788993   47940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-178778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-178778/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-178778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:50:22.906436   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:50:22.906465   47940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 17:50:22.906507   47940 buildroot.go:174] setting up certificates
	I0917 17:50:22.906517   47940 provision.go:84] configureAuth start
	I0917 17:50:22.906533   47940 main.go:141] libmachine: (multinode-178778) Calling .GetMachineName
	I0917 17:50:22.906811   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:50:22.909670   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.909998   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.910037   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.910194   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:22.912444   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.912736   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:22.912763   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:22.912874   47940 provision.go:143] copyHostCerts
	I0917 17:50:22.912915   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:50:22.912972   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 17:50:22.912986   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 17:50:22.913075   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 17:50:22.913189   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:50:22.913216   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 17:50:22.913224   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 17:50:22.913282   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 17:50:22.913353   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:50:22.913376   47940 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 17:50:22.913383   47940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 17:50:22.913421   47940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 17:50:22.913494   47940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.multinode-178778 san=[127.0.0.1 192.168.39.35 localhost minikube multinode-178778]
	I0917 17:50:23.514652   47940 provision.go:177] copyRemoteCerts
	I0917 17:50:23.514714   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:50:23.514755   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:23.517680   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.518020   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:23.518051   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.518301   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:23.518512   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.518718   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:23.518819   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:50:23.604458   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 17:50:23.604569   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:50:23.632425   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 17:50:23.632490   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0917 17:50:23.661014   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 17:50:23.661089   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:50:23.690763   47940 provision.go:87] duration metric: took 784.229418ms to configureAuth
	I0917 17:50:23.690792   47940 buildroot.go:189] setting minikube options for container-runtime
	I0917 17:50:23.691065   47940 config.go:182] Loaded profile config "multinode-178778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:50:23.691164   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:50:23.693876   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.694243   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:50:23.694267   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:50:23.694418   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:50:23.694599   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.694718   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:50:23.694835   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:50:23.694977   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:50:23.695169   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:50:23.695187   47940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 17:51:54.523792   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 17:51:54.523829   47940 machine.go:96] duration metric: took 1m31.999273185s to provisionDockerMachine
	I0917 17:51:54.523844   47940 start.go:293] postStartSetup for "multinode-178778" (driver="kvm2")
	I0917 17:51:54.523857   47940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:51:54.523883   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.524194   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:51:54.524240   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.527742   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.528229   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.528264   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.528453   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.528636   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.528793   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.528908   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.617719   47940 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:51:54.622164   47940 command_runner.go:130] > NAME=Buildroot
	I0917 17:51:54.622181   47940 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0917 17:51:54.622185   47940 command_runner.go:130] > ID=buildroot
	I0917 17:51:54.622190   47940 command_runner.go:130] > VERSION_ID=2023.02.9
	I0917 17:51:54.622195   47940 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0917 17:51:54.622240   47940 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 17:51:54.622258   47940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 17:51:54.622323   47940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 17:51:54.622424   47940 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 17:51:54.622437   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 17:51:54.622549   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 17:51:54.632861   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:51:54.659506   47940 start.go:296] duration metric: took 135.647767ms for postStartSetup
	I0917 17:51:54.659547   47940 fix.go:56] duration metric: took 1m32.1567664s for fixHost
	I0917 17:51:54.659569   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.662213   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.662664   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.662694   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.662876   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.663056   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.663202   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.663346   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.663510   47940 main.go:141] libmachine: Using SSH client type: native
	I0917 17:51:54.663686   47940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0917 17:51:54.663699   47940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 17:51:54.778470   47940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726595514.742790022
	
	I0917 17:51:54.778492   47940 fix.go:216] guest clock: 1726595514.742790022
	I0917 17:51:54.778499   47940 fix.go:229] Guest: 2024-09-17 17:51:54.742790022 +0000 UTC Remote: 2024-09-17 17:51:54.659551225 +0000 UTC m=+92.289658942 (delta=83.238797ms)
	I0917 17:51:54.778517   47940 fix.go:200] guest clock delta is within tolerance: 83.238797ms
	I0917 17:51:54.778522   47940 start.go:83] releasing machines lock for "multinode-178778", held for 1m32.275759155s
	I0917 17:51:54.778543   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.778808   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:51:54.781489   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.781845   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.781867   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.782014   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782514   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782706   47940 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:51:54.782792   47940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:51:54.782847   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.782984   47940 ssh_runner.go:195] Run: cat /version.json
	I0917 17:51:54.783009   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:51:54.785493   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.785758   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.785869   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.785908   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.786056   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.786157   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:51:54.786184   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:51:54.786225   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.786310   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:51:54.786400   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.786460   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:51:54.786520   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.786568   47940 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:51:54.786690   47940 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:51:54.867371   47940 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0917 17:51:54.867619   47940 ssh_runner.go:195] Run: systemctl --version
	I0917 17:51:54.891627   47940 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0917 17:51:54.891687   47940 command_runner.go:130] > systemd 252 (252)
	I0917 17:51:54.891717   47940 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0917 17:51:54.891789   47940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 17:51:55.051696   47940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 17:51:55.061152   47940 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0917 17:51:55.061605   47940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 17:51:55.061693   47940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:51:55.071924   47940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 17:51:55.071951   47940 start.go:495] detecting cgroup driver to use...
	I0917 17:51:55.072037   47940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 17:51:55.090114   47940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 17:51:55.105787   47940 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:51:55.105871   47940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:51:55.122024   47940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:51:55.160309   47940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:51:55.312660   47940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:51:55.456580   47940 docker.go:233] disabling docker service ...
	I0917 17:51:55.456676   47940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:51:55.475785   47940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:51:55.490838   47940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:51:55.646741   47940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:51:55.799053   47940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:51:55.813687   47940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:51:55.834677   47940 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0917 17:51:55.835163   47940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 17:51:55.835228   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.847702   47940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 17:51:55.847785   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.858755   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.870055   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.881763   47940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:51:55.893710   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.905239   47940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.918113   47940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 17:51:55.929558   47940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:51:55.940665   47940 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 17:51:55.940764   47940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:51:55.951426   47940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:51:56.096913   47940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 17:52:04.220347   47940 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.123392368s)
	I0917 17:52:04.220384   47940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 17:52:04.220449   47940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 17:52:04.226170   47940 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0917 17:52:04.226191   47940 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 17:52:04.226209   47940 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0917 17:52:04.226218   47940 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:52:04.226225   47940 command_runner.go:130] > Access: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226235   47940 command_runner.go:130] > Modify: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226246   47940 command_runner.go:130] > Change: 2024-09-17 17:52:04.065518657 +0000
	I0917 17:52:04.226254   47940 command_runner.go:130] >  Birth: -
	I0917 17:52:04.226484   47940 start.go:563] Will wait 60s for crictl version
	I0917 17:52:04.226535   47940 ssh_runner.go:195] Run: which crictl
	I0917 17:52:04.230478   47940 command_runner.go:130] > /usr/bin/crictl
	I0917 17:52:04.230623   47940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:52:04.276134   47940 command_runner.go:130] > Version:  0.1.0
	I0917 17:52:04.276156   47940 command_runner.go:130] > RuntimeName:  cri-o
	I0917 17:52:04.276162   47940 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0917 17:52:04.276167   47940 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 17:52:04.276188   47940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 17:52:04.276246   47940 ssh_runner.go:195] Run: crio --version
	I0917 17:52:04.309612   47940 command_runner.go:130] > crio version 1.29.1
	I0917 17:52:04.309637   47940 command_runner.go:130] > Version:        1.29.1
	I0917 17:52:04.309649   47940 command_runner.go:130] > GitCommit:      unknown
	I0917 17:52:04.309657   47940 command_runner.go:130] > GitCommitDate:  unknown
	I0917 17:52:04.309664   47940 command_runner.go:130] > GitTreeState:   clean
	I0917 17:52:04.309679   47940 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0917 17:52:04.309686   47940 command_runner.go:130] > GoVersion:      go1.21.6
	I0917 17:52:04.309691   47940 command_runner.go:130] > Compiler:       gc
	I0917 17:52:04.309697   47940 command_runner.go:130] > Platform:       linux/amd64
	I0917 17:52:04.309702   47940 command_runner.go:130] > Linkmode:       dynamic
	I0917 17:52:04.309707   47940 command_runner.go:130] > BuildTags:      
	I0917 17:52:04.309712   47940 command_runner.go:130] >   containers_image_ostree_stub
	I0917 17:52:04.309716   47940 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0917 17:52:04.309721   47940 command_runner.go:130] >   btrfs_noversion
	I0917 17:52:04.309729   47940 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0917 17:52:04.309738   47940 command_runner.go:130] >   libdm_no_deferred_remove
	I0917 17:52:04.309746   47940 command_runner.go:130] >   seccomp
	I0917 17:52:04.309756   47940 command_runner.go:130] > LDFlags:          unknown
	I0917 17:52:04.309762   47940 command_runner.go:130] > SeccompEnabled:   true
	I0917 17:52:04.309770   47940 command_runner.go:130] > AppArmorEnabled:  false
	I0917 17:52:04.310961   47940 ssh_runner.go:195] Run: crio --version
	I0917 17:52:04.343998   47940 command_runner.go:130] > crio version 1.29.1
	I0917 17:52:04.344024   47940 command_runner.go:130] > Version:        1.29.1
	I0917 17:52:04.344031   47940 command_runner.go:130] > GitCommit:      unknown
	I0917 17:52:04.344036   47940 command_runner.go:130] > GitCommitDate:  unknown
	I0917 17:52:04.344040   47940 command_runner.go:130] > GitTreeState:   clean
	I0917 17:52:04.344055   47940 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0917 17:52:04.344062   47940 command_runner.go:130] > GoVersion:      go1.21.6
	I0917 17:52:04.344067   47940 command_runner.go:130] > Compiler:       gc
	I0917 17:52:04.344078   47940 command_runner.go:130] > Platform:       linux/amd64
	I0917 17:52:04.344088   47940 command_runner.go:130] > Linkmode:       dynamic
	I0917 17:52:04.344092   47940 command_runner.go:130] > BuildTags:      
	I0917 17:52:04.344098   47940 command_runner.go:130] >   containers_image_ostree_stub
	I0917 17:52:04.344105   47940 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0917 17:52:04.344111   47940 command_runner.go:130] >   btrfs_noversion
	I0917 17:52:04.344116   47940 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0917 17:52:04.344121   47940 command_runner.go:130] >   libdm_no_deferred_remove
	I0917 17:52:04.344125   47940 command_runner.go:130] >   seccomp
	I0917 17:52:04.344130   47940 command_runner.go:130] > LDFlags:          unknown
	I0917 17:52:04.344133   47940 command_runner.go:130] > SeccompEnabled:   true
	I0917 17:52:04.344138   47940 command_runner.go:130] > AppArmorEnabled:  false
	I0917 17:52:04.347937   47940 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 17:52:04.349686   47940 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:52:04.352706   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:52:04.353146   47940 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:52:04.353173   47940 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:52:04.353413   47940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 17:52:04.358007   47940 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0917 17:52:04.358134   47940 kubeadm.go:883] updating cluster {Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:52:04.358315   47940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 17:52:04.358360   47940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:52:04.410095   47940 command_runner.go:130] > {
	I0917 17:52:04.410120   47940 command_runner.go:130] >   "images": [
	I0917 17:52:04.410125   47940 command_runner.go:130] >     {
	I0917 17:52:04.410136   47940 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0917 17:52:04.410142   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410151   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0917 17:52:04.410156   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410161   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410172   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0917 17:52:04.410181   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0917 17:52:04.410186   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410193   47940 command_runner.go:130] >       "size": "87190579",
	I0917 17:52:04.410199   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410206   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410221   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410230   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410236   47940 command_runner.go:130] >     },
	I0917 17:52:04.410241   47940 command_runner.go:130] >     {
	I0917 17:52:04.410251   47940 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0917 17:52:04.410260   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410268   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0917 17:52:04.410275   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410283   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410296   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0917 17:52:04.410310   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0917 17:52:04.410316   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410323   47940 command_runner.go:130] >       "size": "1363676",
	I0917 17:52:04.410330   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410345   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410354   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410361   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410367   47940 command_runner.go:130] >     },
	I0917 17:52:04.410373   47940 command_runner.go:130] >     {
	I0917 17:52:04.410384   47940 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0917 17:52:04.410394   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410404   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0917 17:52:04.410424   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410431   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410448   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0917 17:52:04.410466   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0917 17:52:04.410473   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410485   47940 command_runner.go:130] >       "size": "31470524",
	I0917 17:52:04.410495   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410502   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410512   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410521   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410530   47940 command_runner.go:130] >     },
	I0917 17:52:04.410538   47940 command_runner.go:130] >     {
	I0917 17:52:04.410550   47940 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0917 17:52:04.410558   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410570   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0917 17:52:04.410579   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410586   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410601   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0917 17:52:04.410622   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0917 17:52:04.410629   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410639   47940 command_runner.go:130] >       "size": "63273227",
	I0917 17:52:04.410646   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.410656   47940 command_runner.go:130] >       "username": "nonroot",
	I0917 17:52:04.410663   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410674   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410682   47940 command_runner.go:130] >     },
	I0917 17:52:04.410688   47940 command_runner.go:130] >     {
	I0917 17:52:04.410699   47940 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0917 17:52:04.410707   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410717   47940 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0917 17:52:04.410726   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410735   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410748   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0917 17:52:04.410763   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0917 17:52:04.410772   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410779   47940 command_runner.go:130] >       "size": "149009664",
	I0917 17:52:04.410787   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.410795   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.410803   47940 command_runner.go:130] >       },
	I0917 17:52:04.410810   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410819   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410828   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410834   47940 command_runner.go:130] >     },
	I0917 17:52:04.410839   47940 command_runner.go:130] >     {
	I0917 17:52:04.410856   47940 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0917 17:52:04.410865   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.410874   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0917 17:52:04.410883   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410890   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.410905   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0917 17:52:04.410921   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0917 17:52:04.410930   47940 command_runner.go:130] >       ],
	I0917 17:52:04.410942   47940 command_runner.go:130] >       "size": "95237600",
	I0917 17:52:04.410950   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.410958   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.410965   47940 command_runner.go:130] >       },
	I0917 17:52:04.410975   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.410982   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.410991   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.410997   47940 command_runner.go:130] >     },
	I0917 17:52:04.411004   47940 command_runner.go:130] >     {
	I0917 17:52:04.411016   47940 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0917 17:52:04.411024   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411034   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0917 17:52:04.411043   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411049   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411065   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0917 17:52:04.411080   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0917 17:52:04.411089   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411096   47940 command_runner.go:130] >       "size": "89437508",
	I0917 17:52:04.411105   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411112   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.411121   47940 command_runner.go:130] >       },
	I0917 17:52:04.411128   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411137   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411144   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411152   47940 command_runner.go:130] >     },
	I0917 17:52:04.411166   47940 command_runner.go:130] >     {
	I0917 17:52:04.411179   47940 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0917 17:52:04.411188   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411196   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0917 17:52:04.411205   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411213   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411238   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0917 17:52:04.411254   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0917 17:52:04.411263   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411271   47940 command_runner.go:130] >       "size": "92733849",
	I0917 17:52:04.411280   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.411286   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411291   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411297   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411303   47940 command_runner.go:130] >     },
	I0917 17:52:04.411311   47940 command_runner.go:130] >     {
	I0917 17:52:04.411322   47940 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0917 17:52:04.411329   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411338   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0917 17:52:04.411346   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411354   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411369   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0917 17:52:04.411412   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0917 17:52:04.411425   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411432   47940 command_runner.go:130] >       "size": "68420934",
	I0917 17:52:04.411439   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411450   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.411457   47940 command_runner.go:130] >       },
	I0917 17:52:04.411466   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411474   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411483   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.411490   47940 command_runner.go:130] >     },
	I0917 17:52:04.411498   47940 command_runner.go:130] >     {
	I0917 17:52:04.411511   47940 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0917 17:52:04.411521   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.411530   47940 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0917 17:52:04.411538   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411546   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.411560   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0917 17:52:04.411575   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0917 17:52:04.411584   47940 command_runner.go:130] >       ],
	I0917 17:52:04.411592   47940 command_runner.go:130] >       "size": "742080",
	I0917 17:52:04.411601   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.411608   47940 command_runner.go:130] >         "value": "65535"
	I0917 17:52:04.411616   47940 command_runner.go:130] >       },
	I0917 17:52:04.411623   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.411633   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.411642   47940 command_runner.go:130] >       "pinned": true
	I0917 17:52:04.411649   47940 command_runner.go:130] >     }
	I0917 17:52:04.411659   47940 command_runner.go:130] >   ]
	I0917 17:52:04.411664   47940 command_runner.go:130] > }
	I0917 17:52:04.411846   47940 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:52:04.411859   47940 crio.go:433] Images already preloaded, skipping extraction
	I0917 17:52:04.411924   47940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:52:04.448053   47940 command_runner.go:130] > {
	I0917 17:52:04.448080   47940 command_runner.go:130] >   "images": [
	I0917 17:52:04.448086   47940 command_runner.go:130] >     {
	I0917 17:52:04.448097   47940 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0917 17:52:04.448104   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448113   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0917 17:52:04.448119   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448125   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448136   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0917 17:52:04.448146   47940 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0917 17:52:04.448151   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448159   47940 command_runner.go:130] >       "size": "87190579",
	I0917 17:52:04.448166   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448179   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448188   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448198   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448213   47940 command_runner.go:130] >     },
	I0917 17:52:04.448222   47940 command_runner.go:130] >     {
	I0917 17:52:04.448232   47940 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0917 17:52:04.448238   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448247   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0917 17:52:04.448256   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448263   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448276   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0917 17:52:04.448291   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0917 17:52:04.448300   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448311   47940 command_runner.go:130] >       "size": "1363676",
	I0917 17:52:04.448321   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448349   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448358   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448364   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448370   47940 command_runner.go:130] >     },
	I0917 17:52:04.448376   47940 command_runner.go:130] >     {
	I0917 17:52:04.448387   47940 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0917 17:52:04.448401   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448412   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0917 17:52:04.448422   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448430   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448446   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0917 17:52:04.448461   47940 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0917 17:52:04.448471   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448478   47940 command_runner.go:130] >       "size": "31470524",
	I0917 17:52:04.448487   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448493   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448502   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448510   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448518   47940 command_runner.go:130] >     },
	I0917 17:52:04.448525   47940 command_runner.go:130] >     {
	I0917 17:52:04.448539   47940 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0917 17:52:04.448556   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448568   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0917 17:52:04.448576   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448583   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448596   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0917 17:52:04.448620   47940 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0917 17:52:04.448629   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448637   47940 command_runner.go:130] >       "size": "63273227",
	I0917 17:52:04.448647   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.448655   47940 command_runner.go:130] >       "username": "nonroot",
	I0917 17:52:04.448663   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448671   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448679   47940 command_runner.go:130] >     },
	I0917 17:52:04.448686   47940 command_runner.go:130] >     {
	I0917 17:52:04.448699   47940 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0917 17:52:04.448709   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448718   47940 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0917 17:52:04.448725   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448733   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448748   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0917 17:52:04.448762   47940 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0917 17:52:04.448771   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448778   47940 command_runner.go:130] >       "size": "149009664",
	I0917 17:52:04.448786   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.448794   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.448802   47940 command_runner.go:130] >       },
	I0917 17:52:04.448810   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448819   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448830   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.448838   47940 command_runner.go:130] >     },
	I0917 17:52:04.448845   47940 command_runner.go:130] >     {
	I0917 17:52:04.448858   47940 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0917 17:52:04.448868   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.448885   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0917 17:52:04.448894   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448902   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.448916   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0917 17:52:04.448932   47940 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0917 17:52:04.448940   47940 command_runner.go:130] >       ],
	I0917 17:52:04.448947   47940 command_runner.go:130] >       "size": "95237600",
	I0917 17:52:04.448955   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.448962   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.448969   47940 command_runner.go:130] >       },
	I0917 17:52:04.448976   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.448986   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.448995   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449001   47940 command_runner.go:130] >     },
	I0917 17:52:04.449009   47940 command_runner.go:130] >     {
	I0917 17:52:04.449022   47940 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0917 17:52:04.449032   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449041   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0917 17:52:04.449050   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449058   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449077   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0917 17:52:04.449093   47940 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0917 17:52:04.449102   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449110   47940 command_runner.go:130] >       "size": "89437508",
	I0917 17:52:04.449118   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449125   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.449134   47940 command_runner.go:130] >       },
	I0917 17:52:04.449142   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449150   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449158   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449166   47940 command_runner.go:130] >     },
	I0917 17:52:04.449172   47940 command_runner.go:130] >     {
	I0917 17:52:04.449185   47940 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0917 17:52:04.449195   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449204   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0917 17:52:04.449212   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449220   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449255   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0917 17:52:04.449270   47940 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0917 17:52:04.449277   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449288   47940 command_runner.go:130] >       "size": "92733849",
	I0917 17:52:04.449297   47940 command_runner.go:130] >       "uid": null,
	I0917 17:52:04.449305   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449314   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449321   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449340   47940 command_runner.go:130] >     },
	I0917 17:52:04.449348   47940 command_runner.go:130] >     {
	I0917 17:52:04.449359   47940 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0917 17:52:04.449368   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449377   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0917 17:52:04.449385   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449393   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449408   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0917 17:52:04.449424   47940 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0917 17:52:04.449432   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449443   47940 command_runner.go:130] >       "size": "68420934",
	I0917 17:52:04.449450   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449459   47940 command_runner.go:130] >         "value": "0"
	I0917 17:52:04.449468   47940 command_runner.go:130] >       },
	I0917 17:52:04.449477   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449486   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449494   47940 command_runner.go:130] >       "pinned": false
	I0917 17:52:04.449502   47940 command_runner.go:130] >     },
	I0917 17:52:04.449509   47940 command_runner.go:130] >     {
	I0917 17:52:04.449522   47940 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0917 17:52:04.449531   47940 command_runner.go:130] >       "repoTags": [
	I0917 17:52:04.449544   47940 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0917 17:52:04.449552   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449560   47940 command_runner.go:130] >       "repoDigests": [
	I0917 17:52:04.449575   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0917 17:52:04.449590   47940 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0917 17:52:04.449599   47940 command_runner.go:130] >       ],
	I0917 17:52:04.449606   47940 command_runner.go:130] >       "size": "742080",
	I0917 17:52:04.449614   47940 command_runner.go:130] >       "uid": {
	I0917 17:52:04.449623   47940 command_runner.go:130] >         "value": "65535"
	I0917 17:52:04.449632   47940 command_runner.go:130] >       },
	I0917 17:52:04.449641   47940 command_runner.go:130] >       "username": "",
	I0917 17:52:04.449649   47940 command_runner.go:130] >       "spec": null,
	I0917 17:52:04.449656   47940 command_runner.go:130] >       "pinned": true
	I0917 17:52:04.449664   47940 command_runner.go:130] >     }
	I0917 17:52:04.449670   47940 command_runner.go:130] >   ]
	I0917 17:52:04.449678   47940 command_runner.go:130] > }
	I0917 17:52:04.449801   47940 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 17:52:04.449814   47940 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:52:04.449824   47940 kubeadm.go:934] updating node { 192.168.39.35 8443 v1.31.1 crio true true} ...
	I0917 17:52:04.449950   47940 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-178778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:52:04.450038   47940 ssh_runner.go:195] Run: crio config
	I0917 17:52:04.493383   47940 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0917 17:52:04.493415   47940 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0917 17:52:04.493425   47940 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0917 17:52:04.493429   47940 command_runner.go:130] > #
	I0917 17:52:04.493440   47940 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0917 17:52:04.493448   47940 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0917 17:52:04.493456   47940 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0917 17:52:04.493464   47940 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0917 17:52:04.493470   47940 command_runner.go:130] > # reload'.
	I0917 17:52:04.493480   47940 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0917 17:52:04.493489   47940 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0917 17:52:04.493499   47940 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0917 17:52:04.493533   47940 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0917 17:52:04.493543   47940 command_runner.go:130] > [crio]
	I0917 17:52:04.493555   47940 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0917 17:52:04.493563   47940 command_runner.go:130] > # containers images, in this directory.
	I0917 17:52:04.493574   47940 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0917 17:52:04.493593   47940 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0917 17:52:04.493604   47940 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0917 17:52:04.493617   47940 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0917 17:52:04.493833   47940 command_runner.go:130] > # imagestore = ""
	I0917 17:52:04.493848   47940 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0917 17:52:04.493858   47940 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0917 17:52:04.494461   47940 command_runner.go:130] > storage_driver = "overlay"
	I0917 17:52:04.494484   47940 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0917 17:52:04.494495   47940 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0917 17:52:04.494502   47940 command_runner.go:130] > storage_option = [
	I0917 17:52:04.495007   47940 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0917 17:52:04.495019   47940 command_runner.go:130] > ]
	I0917 17:52:04.495030   47940 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0917 17:52:04.495039   47940 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0917 17:52:04.495047   47940 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0917 17:52:04.495058   47940 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0917 17:52:04.495074   47940 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0917 17:52:04.495083   47940 command_runner.go:130] > # always happen on a node reboot
	I0917 17:52:04.495107   47940 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0917 17:52:04.495124   47940 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0917 17:52:04.495135   47940 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0917 17:52:04.495147   47940 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0917 17:52:04.495157   47940 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0917 17:52:04.495172   47940 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0917 17:52:04.495189   47940 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0917 17:52:04.495199   47940 command_runner.go:130] > # internal_wipe = true
	I0917 17:52:04.495216   47940 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0917 17:52:04.495228   47940 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0917 17:52:04.495236   47940 command_runner.go:130] > # internal_repair = false
	I0917 17:52:04.495246   47940 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0917 17:52:04.495256   47940 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0917 17:52:04.495266   47940 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0917 17:52:04.495278   47940 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0917 17:52:04.495289   47940 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0917 17:52:04.495298   47940 command_runner.go:130] > [crio.api]
	I0917 17:52:04.495317   47940 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0917 17:52:04.495344   47940 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0917 17:52:04.495357   47940 command_runner.go:130] > # IP address on which the stream server will listen.
	I0917 17:52:04.495367   47940 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0917 17:52:04.495382   47940 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0917 17:52:04.495393   47940 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0917 17:52:04.495400   47940 command_runner.go:130] > # stream_port = "0"
	I0917 17:52:04.495412   47940 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0917 17:52:04.495423   47940 command_runner.go:130] > # stream_enable_tls = false
	I0917 17:52:04.495436   47940 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0917 17:52:04.495447   47940 command_runner.go:130] > # stream_idle_timeout = ""
	I0917 17:52:04.495460   47940 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0917 17:52:04.495474   47940 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0917 17:52:04.495483   47940 command_runner.go:130] > # minutes.
	I0917 17:52:04.495493   47940 command_runner.go:130] > # stream_tls_cert = ""
	I0917 17:52:04.495506   47940 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0917 17:52:04.495534   47940 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0917 17:52:04.495544   47940 command_runner.go:130] > # stream_tls_key = ""
	I0917 17:52:04.495556   47940 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0917 17:52:04.495570   47940 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0917 17:52:04.495598   47940 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0917 17:52:04.495607   47940 command_runner.go:130] > # stream_tls_ca = ""
	I0917 17:52:04.495620   47940 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0917 17:52:04.495630   47940 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0917 17:52:04.495645   47940 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0917 17:52:04.495657   47940 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0917 17:52:04.495669   47940 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0917 17:52:04.495681   47940 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0917 17:52:04.495690   47940 command_runner.go:130] > [crio.runtime]
	I0917 17:52:04.495700   47940 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0917 17:52:04.495712   47940 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0917 17:52:04.495721   47940 command_runner.go:130] > # "nofile=1024:2048"
	I0917 17:52:04.495731   47940 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0917 17:52:04.495740   47940 command_runner.go:130] > # default_ulimits = [
	I0917 17:52:04.495745   47940 command_runner.go:130] > # ]
	I0917 17:52:04.495753   47940 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0917 17:52:04.495759   47940 command_runner.go:130] > # no_pivot = false
	I0917 17:52:04.495768   47940 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0917 17:52:04.495781   47940 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0917 17:52:04.495791   47940 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0917 17:52:04.495801   47940 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0917 17:52:04.495811   47940 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0917 17:52:04.495822   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0917 17:52:04.495833   47940 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0917 17:52:04.495842   47940 command_runner.go:130] > # Cgroup setting for conmon
	I0917 17:52:04.495854   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0917 17:52:04.495864   47940 command_runner.go:130] > conmon_cgroup = "pod"
	I0917 17:52:04.495876   47940 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0917 17:52:04.495886   47940 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0917 17:52:04.495908   47940 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0917 17:52:04.495918   47940 command_runner.go:130] > conmon_env = [
	I0917 17:52:04.495928   47940 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0917 17:52:04.495936   47940 command_runner.go:130] > ]
	I0917 17:52:04.495945   47940 command_runner.go:130] > # Additional environment variables to set for all the
	I0917 17:52:04.495956   47940 command_runner.go:130] > # containers. These are overridden if set in the
	I0917 17:52:04.495966   47940 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0917 17:52:04.495976   47940 command_runner.go:130] > # default_env = [
	I0917 17:52:04.495985   47940 command_runner.go:130] > # ]
	I0917 17:52:04.495994   47940 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0917 17:52:04.496011   47940 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0917 17:52:04.496020   47940 command_runner.go:130] > # selinux = false
	I0917 17:52:04.496032   47940 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0917 17:52:04.496044   47940 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0917 17:52:04.496057   47940 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0917 17:52:04.496067   47940 command_runner.go:130] > # seccomp_profile = ""
	I0917 17:52:04.496080   47940 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0917 17:52:04.496092   47940 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0917 17:52:04.496105   47940 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0917 17:52:04.496113   47940 command_runner.go:130] > # which might increase security.
	I0917 17:52:04.496124   47940 command_runner.go:130] > # This option is currently deprecated,
	I0917 17:52:04.496137   47940 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0917 17:52:04.496148   47940 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0917 17:52:04.496160   47940 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0917 17:52:04.496173   47940 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0917 17:52:04.496185   47940 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0917 17:52:04.496197   47940 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0917 17:52:04.496207   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.496219   47940 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0917 17:52:04.496232   47940 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0917 17:52:04.496242   47940 command_runner.go:130] > # the cgroup blockio controller.
	I0917 17:52:04.496251   47940 command_runner.go:130] > # blockio_config_file = ""
	I0917 17:52:04.496264   47940 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0917 17:52:04.496281   47940 command_runner.go:130] > # blockio parameters.
	I0917 17:52:04.496292   47940 command_runner.go:130] > # blockio_reload = false
	I0917 17:52:04.496306   47940 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0917 17:52:04.496316   47940 command_runner.go:130] > # irqbalance daemon.
	I0917 17:52:04.496325   47940 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0917 17:52:04.496343   47940 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0917 17:52:04.496358   47940 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0917 17:52:04.496377   47940 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0917 17:52:04.496390   47940 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0917 17:52:04.496403   47940 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0917 17:52:04.496413   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.496423   47940 command_runner.go:130] > # rdt_config_file = ""
	I0917 17:52:04.496433   47940 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0917 17:52:04.496443   47940 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0917 17:52:04.496473   47940 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0917 17:52:04.496483   47940 command_runner.go:130] > # separate_pull_cgroup = ""
	I0917 17:52:04.496494   47940 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0917 17:52:04.496507   47940 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0917 17:52:04.496516   47940 command_runner.go:130] > # will be added.
	I0917 17:52:04.496527   47940 command_runner.go:130] > # default_capabilities = [
	I0917 17:52:04.496534   47940 command_runner.go:130] > # 	"CHOWN",
	I0917 17:52:04.496544   47940 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0917 17:52:04.496553   47940 command_runner.go:130] > # 	"FSETID",
	I0917 17:52:04.496560   47940 command_runner.go:130] > # 	"FOWNER",
	I0917 17:52:04.496568   47940 command_runner.go:130] > # 	"SETGID",
	I0917 17:52:04.496575   47940 command_runner.go:130] > # 	"SETUID",
	I0917 17:52:04.496587   47940 command_runner.go:130] > # 	"SETPCAP",
	I0917 17:52:04.496595   47940 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0917 17:52:04.496604   47940 command_runner.go:130] > # 	"KILL",
	I0917 17:52:04.496611   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496626   47940 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0917 17:52:04.496639   47940 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0917 17:52:04.496650   47940 command_runner.go:130] > # add_inheritable_capabilities = false
	I0917 17:52:04.496668   47940 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0917 17:52:04.496680   47940 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0917 17:52:04.496687   47940 command_runner.go:130] > default_sysctls = [
	I0917 17:52:04.496696   47940 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0917 17:52:04.496704   47940 command_runner.go:130] > ]
	I0917 17:52:04.496713   47940 command_runner.go:130] > # List of devices on the host that a
	I0917 17:52:04.496726   47940 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0917 17:52:04.496735   47940 command_runner.go:130] > # allowed_devices = [
	I0917 17:52:04.496742   47940 command_runner.go:130] > # 	"/dev/fuse",
	I0917 17:52:04.496750   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496759   47940 command_runner.go:130] > # List of additional devices. specified as
	I0917 17:52:04.496773   47940 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0917 17:52:04.496783   47940 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0917 17:52:04.496796   47940 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0917 17:52:04.496806   47940 command_runner.go:130] > # additional_devices = [
	I0917 17:52:04.496814   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496824   47940 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0917 17:52:04.496833   47940 command_runner.go:130] > # cdi_spec_dirs = [
	I0917 17:52:04.496839   47940 command_runner.go:130] > # 	"/etc/cdi",
	I0917 17:52:04.496850   47940 command_runner.go:130] > # 	"/var/run/cdi",
	I0917 17:52:04.496858   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496869   47940 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0917 17:52:04.496883   47940 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0917 17:52:04.496892   47940 command_runner.go:130] > # Defaults to false.
	I0917 17:52:04.496903   47940 command_runner.go:130] > # device_ownership_from_security_context = false
	I0917 17:52:04.496918   47940 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0917 17:52:04.496930   47940 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0917 17:52:04.496937   47940 command_runner.go:130] > # hooks_dir = [
	I0917 17:52:04.496949   47940 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0917 17:52:04.496957   47940 command_runner.go:130] > # ]
	I0917 17:52:04.496969   47940 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0917 17:52:04.496982   47940 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0917 17:52:04.496993   47940 command_runner.go:130] > # its default mounts from the following two files:
	I0917 17:52:04.497007   47940 command_runner.go:130] > #
	I0917 17:52:04.497021   47940 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0917 17:52:04.497035   47940 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0917 17:52:04.497047   47940 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0917 17:52:04.497054   47940 command_runner.go:130] > #
	I0917 17:52:04.497064   47940 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0917 17:52:04.497076   47940 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0917 17:52:04.497087   47940 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0917 17:52:04.497099   47940 command_runner.go:130] > #      only add mounts it finds in this file.
	I0917 17:52:04.497107   47940 command_runner.go:130] > #
	I0917 17:52:04.497115   47940 command_runner.go:130] > # default_mounts_file = ""
	I0917 17:52:04.497126   47940 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0917 17:52:04.497141   47940 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0917 17:52:04.497151   47940 command_runner.go:130] > pids_limit = 1024
	I0917 17:52:04.497162   47940 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0917 17:52:04.497176   47940 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0917 17:52:04.497189   47940 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0917 17:52:04.497203   47940 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0917 17:52:04.497212   47940 command_runner.go:130] > # log_size_max = -1
	I0917 17:52:04.497225   47940 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0917 17:52:04.497247   47940 command_runner.go:130] > # log_to_journald = false
	I0917 17:52:04.497259   47940 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0917 17:52:04.497270   47940 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0917 17:52:04.497282   47940 command_runner.go:130] > # Path to directory for container attach sockets.
	I0917 17:52:04.497294   47940 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0917 17:52:04.497306   47940 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0917 17:52:04.497316   47940 command_runner.go:130] > # bind_mount_prefix = ""
	I0917 17:52:04.497333   47940 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0917 17:52:04.497342   47940 command_runner.go:130] > # read_only = false
	I0917 17:52:04.497356   47940 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0917 17:52:04.497369   47940 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0917 17:52:04.497376   47940 command_runner.go:130] > # live configuration reload.
	I0917 17:52:04.497386   47940 command_runner.go:130] > # log_level = "info"
	I0917 17:52:04.497399   47940 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0917 17:52:04.497411   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.497420   47940 command_runner.go:130] > # log_filter = ""
	I0917 17:52:04.497431   47940 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0917 17:52:04.497444   47940 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0917 17:52:04.497454   47940 command_runner.go:130] > # separated by comma.
	I0917 17:52:04.497468   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497478   47940 command_runner.go:130] > # uid_mappings = ""
	I0917 17:52:04.497490   47940 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0917 17:52:04.497503   47940 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0917 17:52:04.497512   47940 command_runner.go:130] > # separated by comma.
	I0917 17:52:04.497525   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497534   47940 command_runner.go:130] > # gid_mappings = ""
	I0917 17:52:04.497545   47940 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0917 17:52:04.497558   47940 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0917 17:52:04.497571   47940 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0917 17:52:04.497585   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497596   47940 command_runner.go:130] > # minimum_mappable_uid = -1
	I0917 17:52:04.497608   47940 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0917 17:52:04.497621   47940 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0917 17:52:04.497634   47940 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0917 17:52:04.497647   47940 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0917 17:52:04.497657   47940 command_runner.go:130] > # minimum_mappable_gid = -1
	I0917 17:52:04.497670   47940 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0917 17:52:04.497683   47940 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0917 17:52:04.497696   47940 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0917 17:52:04.497707   47940 command_runner.go:130] > # ctr_stop_timeout = 30
	I0917 17:52:04.497719   47940 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0917 17:52:04.497731   47940 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0917 17:52:04.497739   47940 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0917 17:52:04.497750   47940 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0917 17:52:04.497757   47940 command_runner.go:130] > drop_infra_ctr = false
	I0917 17:52:04.497769   47940 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0917 17:52:04.497782   47940 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0917 17:52:04.497797   47940 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0917 17:52:04.497807   47940 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0917 17:52:04.497822   47940 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0917 17:52:04.497835   47940 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0917 17:52:04.497847   47940 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0917 17:52:04.497859   47940 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0917 17:52:04.497870   47940 command_runner.go:130] > # shared_cpuset = ""
	I0917 17:52:04.497882   47940 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0917 17:52:04.497893   47940 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0917 17:52:04.497901   47940 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0917 17:52:04.497913   47940 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0917 17:52:04.497921   47940 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0917 17:52:04.497933   47940 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0917 17:52:04.497947   47940 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0917 17:52:04.497957   47940 command_runner.go:130] > # enable_criu_support = false
	I0917 17:52:04.497968   47940 command_runner.go:130] > # Enable/disable the generation of the container,
	I0917 17:52:04.497980   47940 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0917 17:52:04.497989   47940 command_runner.go:130] > # enable_pod_events = false
	I0917 17:52:04.498001   47940 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0917 17:52:04.498014   47940 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0917 17:52:04.498024   47940 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0917 17:52:04.498033   47940 command_runner.go:130] > # default_runtime = "runc"
	I0917 17:52:04.498044   47940 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0917 17:52:04.498059   47940 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0917 17:52:04.498077   47940 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0917 17:52:04.498088   47940 command_runner.go:130] > # creation as a file is not desired either.
	I0917 17:52:04.498102   47940 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0917 17:52:04.498113   47940 command_runner.go:130] > # the hostname is being managed dynamically.
	I0917 17:52:04.498122   47940 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0917 17:52:04.498130   47940 command_runner.go:130] > # ]
	I0917 17:52:04.498143   47940 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0917 17:52:04.498157   47940 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0917 17:52:04.498182   47940 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0917 17:52:04.498194   47940 command_runner.go:130] > # Each entry in the table should follow the format:
	I0917 17:52:04.498200   47940 command_runner.go:130] > #
	I0917 17:52:04.498210   47940 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0917 17:52:04.498221   47940 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0917 17:52:04.498279   47940 command_runner.go:130] > # runtime_type = "oci"
	I0917 17:52:04.498289   47940 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0917 17:52:04.498295   47940 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0917 17:52:04.498302   47940 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0917 17:52:04.498310   47940 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0917 17:52:04.498325   47940 command_runner.go:130] > # monitor_env = []
	I0917 17:52:04.498340   47940 command_runner.go:130] > # privileged_without_host_devices = false
	I0917 17:52:04.498350   47940 command_runner.go:130] > # allowed_annotations = []
	I0917 17:52:04.498363   47940 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0917 17:52:04.498372   47940 command_runner.go:130] > # Where:
	I0917 17:52:04.498382   47940 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0917 17:52:04.498396   47940 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0917 17:52:04.498409   47940 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0917 17:52:04.498423   47940 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0917 17:52:04.498432   47940 command_runner.go:130] > #   in $PATH.
	I0917 17:52:04.498443   47940 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0917 17:52:04.498453   47940 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0917 17:52:04.498467   47940 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0917 17:52:04.498476   47940 command_runner.go:130] > #   state.
	I0917 17:52:04.498488   47940 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0917 17:52:04.498501   47940 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0917 17:52:04.498514   47940 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0917 17:52:04.498526   47940 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0917 17:52:04.498540   47940 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0917 17:52:04.498553   47940 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0917 17:52:04.498564   47940 command_runner.go:130] > #   The currently recognized values are:
	I0917 17:52:04.498576   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0917 17:52:04.498590   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0917 17:52:04.498610   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0917 17:52:04.498623   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0917 17:52:04.498638   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0917 17:52:04.498651   47940 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0917 17:52:04.498665   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0917 17:52:04.498678   47940 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0917 17:52:04.498688   47940 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0917 17:52:04.498701   47940 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0917 17:52:04.498711   47940 command_runner.go:130] > #   deprecated option "conmon".
	I0917 17:52:04.498724   47940 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0917 17:52:04.498735   47940 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0917 17:52:04.498750   47940 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0917 17:52:04.498761   47940 command_runner.go:130] > #   should be moved to the container's cgroup
	I0917 17:52:04.498774   47940 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0917 17:52:04.498785   47940 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0917 17:52:04.498800   47940 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0917 17:52:04.498811   47940 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0917 17:52:04.498819   47940 command_runner.go:130] > #
	I0917 17:52:04.498828   47940 command_runner.go:130] > # Using the seccomp notifier feature:
	I0917 17:52:04.498835   47940 command_runner.go:130] > #
	I0917 17:52:04.498845   47940 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0917 17:52:04.498859   47940 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0917 17:52:04.498867   47940 command_runner.go:130] > #
	I0917 17:52:04.498878   47940 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0917 17:52:04.498892   47940 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0917 17:52:04.498899   47940 command_runner.go:130] > #
	I0917 17:52:04.498910   47940 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0917 17:52:04.498919   47940 command_runner.go:130] > # feature.
	I0917 17:52:04.498924   47940 command_runner.go:130] > #
	I0917 17:52:04.498937   47940 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0917 17:52:04.498950   47940 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0917 17:52:04.498963   47940 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0917 17:52:04.498976   47940 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0917 17:52:04.498997   47940 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0917 17:52:04.499005   47940 command_runner.go:130] > #
	I0917 17:52:04.499016   47940 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0917 17:52:04.499031   47940 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0917 17:52:04.499039   47940 command_runner.go:130] > #
	I0917 17:52:04.499050   47940 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0917 17:52:04.499062   47940 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0917 17:52:04.499070   47940 command_runner.go:130] > #
	I0917 17:52:04.499081   47940 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0917 17:52:04.499092   47940 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0917 17:52:04.499099   47940 command_runner.go:130] > # limitation.
	I0917 17:52:04.499108   47940 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0917 17:52:04.499117   47940 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0917 17:52:04.499127   47940 command_runner.go:130] > runtime_type = "oci"
	I0917 17:52:04.499135   47940 command_runner.go:130] > runtime_root = "/run/runc"
	I0917 17:52:04.499145   47940 command_runner.go:130] > runtime_config_path = ""
	I0917 17:52:04.499155   47940 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0917 17:52:04.499165   47940 command_runner.go:130] > monitor_cgroup = "pod"
	I0917 17:52:04.499174   47940 command_runner.go:130] > monitor_exec_cgroup = ""
	I0917 17:52:04.499182   47940 command_runner.go:130] > monitor_env = [
	I0917 17:52:04.499195   47940 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0917 17:52:04.499203   47940 command_runner.go:130] > ]
	I0917 17:52:04.499212   47940 command_runner.go:130] > privileged_without_host_devices = false
	I0917 17:52:04.499226   47940 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0917 17:52:04.499237   47940 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0917 17:52:04.499253   47940 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0917 17:52:04.499267   47940 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0917 17:52:04.499281   47940 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0917 17:52:04.499294   47940 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0917 17:52:04.499312   47940 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0917 17:52:04.499332   47940 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0917 17:52:04.499344   47940 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0917 17:52:04.499359   47940 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0917 17:52:04.499376   47940 command_runner.go:130] > # Example:
	I0917 17:52:04.499387   47940 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0917 17:52:04.499396   47940 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0917 17:52:04.499407   47940 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0917 17:52:04.499419   47940 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0917 17:52:04.499428   47940 command_runner.go:130] > # cpuset = 0
	I0917 17:52:04.499435   47940 command_runner.go:130] > # cpushares = "0-1"
	I0917 17:52:04.499445   47940 command_runner.go:130] > # Where:
	I0917 17:52:04.499453   47940 command_runner.go:130] > # The workload name is workload-type.
	I0917 17:52:04.499468   47940 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0917 17:52:04.499481   47940 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0917 17:52:04.499493   47940 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0917 17:52:04.499509   47940 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0917 17:52:04.499521   47940 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0917 17:52:04.499531   47940 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0917 17:52:04.499545   47940 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0917 17:52:04.499556   47940 command_runner.go:130] > # Default value is set to true
	I0917 17:52:04.499566   47940 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0917 17:52:04.499577   47940 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0917 17:52:04.499588   47940 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0917 17:52:04.499596   47940 command_runner.go:130] > # Default value is set to 'false'
	I0917 17:52:04.499607   47940 command_runner.go:130] > # disable_hostport_mapping = false
	I0917 17:52:04.499619   47940 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0917 17:52:04.499626   47940 command_runner.go:130] > #
	I0917 17:52:04.499636   47940 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0917 17:52:04.499647   47940 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0917 17:52:04.499655   47940 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0917 17:52:04.499664   47940 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0917 17:52:04.499672   47940 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0917 17:52:04.499678   47940 command_runner.go:130] > [crio.image]
	I0917 17:52:04.499687   47940 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0917 17:52:04.499695   47940 command_runner.go:130] > # default_transport = "docker://"
	I0917 17:52:04.499705   47940 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0917 17:52:04.499723   47940 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0917 17:52:04.499730   47940 command_runner.go:130] > # global_auth_file = ""
	I0917 17:52:04.499738   47940 command_runner.go:130] > # The image used to instantiate infra containers.
	I0917 17:52:04.499747   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.499756   47940 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0917 17:52:04.499766   47940 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0917 17:52:04.499776   47940 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0917 17:52:04.499784   47940 command_runner.go:130] > # This option supports live configuration reload.
	I0917 17:52:04.499791   47940 command_runner.go:130] > # pause_image_auth_file = ""
	I0917 17:52:04.499800   47940 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0917 17:52:04.499810   47940 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0917 17:52:04.499825   47940 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0917 17:52:04.499835   47940 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0917 17:52:04.499842   47940 command_runner.go:130] > # pause_command = "/pause"
	I0917 17:52:04.499851   47940 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0917 17:52:04.499860   47940 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0917 17:52:04.499870   47940 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0917 17:52:04.499885   47940 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0917 17:52:04.499898   47940 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0917 17:52:04.499911   47940 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0917 17:52:04.499921   47940 command_runner.go:130] > # pinned_images = [
	I0917 17:52:04.499929   47940 command_runner.go:130] > # ]
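The pinned-image patterns described just above come in three flavors: exact names, globs with a trailing '*', and keywords wrapped in '*' on both ends. The following Go sketch is illustrative only (it is not CRI-O's matcher) and simply demonstrates those three pattern kinds:

    package main

    import (
        "fmt"
        "strings"
    )

    // matchPinned demonstrates the three pattern kinds described in the
    // comments above: exact, trailing-glob, and keyword. Illustrative only.
    func matchPinned(pattern, image string) bool {
        switch {
        case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
            return strings.Contains(image, strings.Trim(pattern, "*")) // keyword match
        case strings.HasSuffix(pattern, "*"):
            return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*")) // glob match
        default:
            return pattern == image // exact match
        }
    }

    func main() {
        img := "registry.k8s.io/pause:3.10"
        for _, p := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/*", "*pause*"} {
            fmt.Printf("%-30s matches %s: %v\n", p, img, matchPinned(p, img))
        }
    }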
	I0917 17:52:04.499939   47940 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0917 17:52:04.499952   47940 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0917 17:52:04.499966   47940 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0917 17:52:04.499980   47940 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0917 17:52:04.499993   47940 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0917 17:52:04.500003   47940 command_runner.go:130] > # signature_policy = ""
	I0917 17:52:04.500014   47940 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0917 17:52:04.500025   47940 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0917 17:52:04.500039   47940 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0917 17:52:04.500052   47940 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0917 17:52:04.500065   47940 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0917 17:52:04.500083   47940 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0917 17:52:04.500096   47940 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0917 17:52:04.500109   47940 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0917 17:52:04.500119   47940 command_runner.go:130] > # changing them here.
	I0917 17:52:04.500128   47940 command_runner.go:130] > # insecure_registries = [
	I0917 17:52:04.500136   47940 command_runner.go:130] > # ]
	I0917 17:52:04.500148   47940 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0917 17:52:04.500159   47940 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0917 17:52:04.500170   47940 command_runner.go:130] > # image_volumes = "mkdir"
	I0917 17:52:04.500180   47940 command_runner.go:130] > # Temporary directory to use for storing big files
	I0917 17:52:04.500190   47940 command_runner.go:130] > # big_files_temporary_dir = ""
	I0917 17:52:04.500200   47940 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0917 17:52:04.500210   47940 command_runner.go:130] > # CNI plugins.
	I0917 17:52:04.500217   47940 command_runner.go:130] > [crio.network]
	I0917 17:52:04.500229   47940 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0917 17:52:04.500241   47940 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0917 17:52:04.500250   47940 command_runner.go:130] > # cni_default_network = ""
	I0917 17:52:04.500262   47940 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0917 17:52:04.500271   47940 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0917 17:52:04.500281   47940 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0917 17:52:04.500290   47940 command_runner.go:130] > # plugin_dirs = [
	I0917 17:52:04.500297   47940 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0917 17:52:04.500305   47940 command_runner.go:130] > # ]
	I0917 17:52:04.500316   47940 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0917 17:52:04.500325   47940 command_runner.go:130] > [crio.metrics]
	I0917 17:52:04.500341   47940 command_runner.go:130] > # Globally enable or disable metrics support.
	I0917 17:52:04.500348   47940 command_runner.go:130] > enable_metrics = true
	I0917 17:52:04.500359   47940 command_runner.go:130] > # Specify enabled metrics collectors.
	I0917 17:52:04.500369   47940 command_runner.go:130] > # Per default all metrics are enabled.
	I0917 17:52:04.500381   47940 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0917 17:52:04.500394   47940 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0917 17:52:04.500406   47940 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0917 17:52:04.500416   47940 command_runner.go:130] > # metrics_collectors = [
	I0917 17:52:04.500431   47940 command_runner.go:130] > # 	"operations",
	I0917 17:52:04.500444   47940 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0917 17:52:04.500455   47940 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0917 17:52:04.500463   47940 command_runner.go:130] > # 	"operations_errors",
	I0917 17:52:04.500474   47940 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0917 17:52:04.500481   47940 command_runner.go:130] > # 	"image_pulls_by_name",
	I0917 17:52:04.500491   47940 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0917 17:52:04.500501   47940 command_runner.go:130] > # 	"image_pulls_failures",
	I0917 17:52:04.500509   47940 command_runner.go:130] > # 	"image_pulls_successes",
	I0917 17:52:04.500519   47940 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0917 17:52:04.500528   47940 command_runner.go:130] > # 	"image_layer_reuse",
	I0917 17:52:04.500538   47940 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0917 17:52:04.500549   47940 command_runner.go:130] > # 	"containers_oom_total",
	I0917 17:52:04.500556   47940 command_runner.go:130] > # 	"containers_oom",
	I0917 17:52:04.500566   47940 command_runner.go:130] > # 	"processes_defunct",
	I0917 17:52:04.500573   47940 command_runner.go:130] > # 	"operations_total",
	I0917 17:52:04.500583   47940 command_runner.go:130] > # 	"operations_latency_seconds",
	I0917 17:52:04.500592   47940 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0917 17:52:04.500602   47940 command_runner.go:130] > # 	"operations_errors_total",
	I0917 17:52:04.500613   47940 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0917 17:52:04.500623   47940 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0917 17:52:04.500631   47940 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0917 17:52:04.500640   47940 command_runner.go:130] > # 	"image_pulls_success_total",
	I0917 17:52:04.500648   47940 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0917 17:52:04.500657   47940 command_runner.go:130] > # 	"containers_oom_count_total",
	I0917 17:52:04.500666   47940 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0917 17:52:04.500677   47940 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0917 17:52:04.500685   47940 command_runner.go:130] > # ]
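As the comments above note, collector names may carry a "container_runtime_" and/or "crio_" prefix and still refer to the same metric. A small illustrative Go sketch of that normalization (not CRI-O's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizeCollector strips the optional prefixes so that "operations",
    // "crio_operations" and "container_runtime_crio_operations" all map to
    // the same collector name. Illustrative only.
    func normalizeCollector(name string) string {
        name = strings.TrimPrefix(name, "container_runtime_")
        return strings.TrimPrefix(name, "crio_")
    }

    func main() {
        for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
            fmt.Printf("%-40s -> %s\n", n, normalizeCollector(n))
        }
    }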
	I0917 17:52:04.500695   47940 command_runner.go:130] > # The port on which the metrics server will listen.
	I0917 17:52:04.500703   47940 command_runner.go:130] > # metrics_port = 9090
	I0917 17:52:04.500712   47940 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0917 17:52:04.500721   47940 command_runner.go:130] > # metrics_socket = ""
	I0917 17:52:04.500731   47940 command_runner.go:130] > # The certificate for the secure metrics server.
	I0917 17:52:04.500753   47940 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0917 17:52:04.500767   47940 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0917 17:52:04.500778   47940 command_runner.go:130] > # certificate on any modification event.
	I0917 17:52:04.500785   47940 command_runner.go:130] > # metrics_cert = ""
	I0917 17:52:04.500793   47940 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0917 17:52:04.500806   47940 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0917 17:52:04.500816   47940 command_runner.go:130] > # metrics_key = ""
	I0917 17:52:04.500827   47940 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0917 17:52:04.500835   47940 command_runner.go:130] > [crio.tracing]
	I0917 17:52:04.500845   47940 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0917 17:52:04.500854   47940 command_runner.go:130] > # enable_tracing = false
	I0917 17:52:04.500864   47940 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0917 17:52:04.500875   47940 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0917 17:52:04.500888   47940 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0917 17:52:04.500899   47940 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0917 17:52:04.500909   47940 command_runner.go:130] > # CRI-O NRI configuration.
	I0917 17:52:04.500917   47940 command_runner.go:130] > [crio.nri]
	I0917 17:52:04.500925   47940 command_runner.go:130] > # Globally enable or disable NRI.
	I0917 17:52:04.500934   47940 command_runner.go:130] > # enable_nri = false
	I0917 17:52:04.500949   47940 command_runner.go:130] > # NRI socket to listen on.
	I0917 17:52:04.500960   47940 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0917 17:52:04.500968   47940 command_runner.go:130] > # NRI plugin directory to use.
	I0917 17:52:04.500979   47940 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0917 17:52:04.500990   47940 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0917 17:52:04.501001   47940 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0917 17:52:04.501013   47940 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0917 17:52:04.501022   47940 command_runner.go:130] > # nri_disable_connections = false
	I0917 17:52:04.501033   47940 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0917 17:52:04.501042   47940 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0917 17:52:04.501053   47940 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0917 17:52:04.501062   47940 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0917 17:52:04.501075   47940 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0917 17:52:04.501084   47940 command_runner.go:130] > [crio.stats]
	I0917 17:52:04.501100   47940 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0917 17:52:04.501113   47940 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0917 17:52:04.501122   47940 command_runner.go:130] > # stats_collection_period = 0
	I0917 17:52:04.501159   47940 command_runner.go:130] ! time="2024-09-17 17:52:04.448275633Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0917 17:52:04.501180   47940 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0917 17:52:04.501303   47940 cni.go:84] Creating CNI manager for ""
	I0917 17:52:04.501319   47940 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 17:52:04.501355   47940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:52:04.501387   47940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-178778 NodeName:multinode-178778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:52:04.501553   47940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-178778"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
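The generated config above bundles four API documents separated by "---": an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration. A minimal Go sketch that splits such a file and reports each document's kind; the path is the one the file is copied to a few lines below and only exists inside the minikube VM, and gopkg.in/yaml.v3 is an assumed dependency:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the transfer step later in this log; adjust for
        // local experiments outside the minikube VM.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var hdr struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &hdr); err == nil && hdr.Kind != "" {
                fmt.Printf("%s (%s)\n", hdr.Kind, hdr.APIVersion)
            }
        }
    }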
	
	I0917 17:52:04.501628   47940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:52:04.514520   47940 command_runner.go:130] > kubeadm
	I0917 17:52:04.514543   47940 command_runner.go:130] > kubectl
	I0917 17:52:04.514548   47940 command_runner.go:130] > kubelet
	I0917 17:52:04.514573   47940 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:52:04.514639   47940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 17:52:04.525622   47940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0917 17:52:04.544152   47940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:52:04.562825   47940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0917 17:52:04.581283   47940 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0917 17:52:04.585532   47940 command_runner.go:130] > 192.168.39.35	control-plane.minikube.internal
	I0917 17:52:04.585687   47940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:52:04.729835   47940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:52:04.745911   47940 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778 for IP: 192.168.39.35
	I0917 17:52:04.745935   47940 certs.go:194] generating shared ca certs ...
	I0917 17:52:04.745950   47940 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:52:04.746105   47940 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 17:52:04.746148   47940 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 17:52:04.746164   47940 certs.go:256] generating profile certs ...
	I0917 17:52:04.746239   47940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/client.key
	I0917 17:52:04.746293   47940 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key.b26bef04
	I0917 17:52:04.746332   47940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key
	I0917 17:52:04.746343   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:52:04.746355   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:52:04.746367   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:52:04.746378   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:52:04.746390   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:52:04.746410   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:52:04.746425   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:52:04.746436   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:52:04.746487   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 17:52:04.746516   47940 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 17:52:04.746522   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 17:52:04.746541   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:52:04.746561   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:52:04.746581   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 17:52:04.746623   47940 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 17:52:04.746649   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 17:52:04.746672   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:04.746683   47940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 17:52:04.747235   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:52:04.774491   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:52:04.800965   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:52:04.828637   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:52:04.855141   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 17:52:04.881673   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:52:04.908744   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:52:04.936313   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/multinode-178778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 17:52:04.963202   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 17:52:04.989620   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:52:05.016366   47940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 17:52:05.043395   47940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:52:05.061825   47940 ssh_runner.go:195] Run: openssl version
	I0917 17:52:05.068566   47940 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0917 17:52:05.068674   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 17:52:05.080678   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085309   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085333   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.085386   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 17:52:05.091722   47940 command_runner.go:130] > 51391683
	I0917 17:52:05.091797   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 17:52:05.101992   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 17:52:05.113945   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118711   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118762   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.118813   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 17:52:05.124849   47940 command_runner.go:130] > 3ec20f2e
	I0917 17:52:05.124961   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:52:05.135236   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:52:05.147407   47940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152316   47940 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152355   47940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.152413   47940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:52:05.158848   47940 command_runner.go:130] > b5213941
	I0917 17:52:05.158935   47940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
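Each certificate above is installed by computing its OpenSSL subject hash and symlinking /etc/ssl/certs/<hash>.0 to it, which is what the `test -L || ln -fs` commands do. A minimal Go sketch of the same two steps (illustrative; it assumes openssl is on PATH and that the caller can write under /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash mirrors the two commands in the log above: ask openssl for
    // the certificate's subject hash, then point /etc/ssl/certs/<hash>.0 at it.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }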
	I0917 17:52:05.169028   47940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:52:05.173737   47940 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:52:05.173765   47940 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0917 17:52:05.173770   47940 command_runner.go:130] > Device: 253,1	Inode: 5242920     Links: 1
	I0917 17:52:05.173777   47940 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:52:05.173794   47940 command_runner.go:130] > Access: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173799   47940 command_runner.go:130] > Modify: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173804   47940 command_runner.go:130] > Change: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173809   47940 command_runner.go:130] >  Birth: 2024-09-17 17:45:14.502396918 +0000
	I0917 17:52:05.173865   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 17:52:05.179855   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.179925   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 17:52:05.185789   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.185932   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 17:52:05.191963   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.192042   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 17:52:05.198201   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.198356   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 17:52:05.204543   47940 command_runner.go:130] > Certificate will not expire
	I0917 17:52:05.204681   47940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 17:52:05.211194   47940 command_runner.go:130] > Certificate will not expire
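The repeated `openssl x509 -noout -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A pure-Go equivalent using crypto/x509 (illustrative sketch; the path is one of the certs checked above and exists only inside the VM):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, which is what `openssl x509 -checkend 86400` checks for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }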
	I0917 17:52:05.211306   47940 kubeadm.go:392] StartCluster: {Name:multinode-178778 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-178778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.62 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:52:05.211452   47940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 17:52:05.211519   47940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:52:05.250668   47940 command_runner.go:130] > 5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866
	I0917 17:52:05.250692   47940 command_runner.go:130] > bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2
	I0917 17:52:05.250698   47940 command_runner.go:130] > d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9
	I0917 17:52:05.250705   47940 command_runner.go:130] > 1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad
	I0917 17:52:05.250710   47940 command_runner.go:130] > 2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e
	I0917 17:52:05.250719   47940 command_runner.go:130] > 8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b
	I0917 17:52:05.250732   47940 command_runner.go:130] > 4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402
	I0917 17:52:05.250760   47940 command_runner.go:130] > b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc
	I0917 17:52:05.250788   47940 cri.go:89] found id: "5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866"
	I0917 17:52:05.250797   47940 cri.go:89] found id: "bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2"
	I0917 17:52:05.250801   47940 cri.go:89] found id: "d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9"
	I0917 17:52:05.250804   47940 cri.go:89] found id: "1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad"
	I0917 17:52:05.250807   47940 cri.go:89] found id: "2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e"
	I0917 17:52:05.250810   47940 cri.go:89] found id: "8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b"
	I0917 17:52:05.250813   47940 cri.go:89] found id: "4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402"
	I0917 17:52:05.250820   47940 cri.go:89] found id: "b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc"
	I0917 17:52:05.250824   47940 cri.go:89] found id: ""
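The container IDs above are simply the stdout of the crictl invocation a few lines earlier, one ID per line. An illustrative Go sketch of the same listing (assumes crictl is on PATH and sudo access on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same filter as the ssh_runner call above: all containers whose pod
        // lives in the kube-system namespace, IDs only.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }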
	I0917 17:52:05.250880   47940 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.731272006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5420b6a-dd31-479f-95f4-d24a1862c51f name=/runtime.v1.RuntimeService/Version
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.732421879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86f3e648-dcfa-441a-a454-c489c7eabdd6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.732851222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595774732826305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86f3e648-dcfa-441a-a454-c489c7eabdd6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.734033170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fac79254-b129-4d67-aacd-2212c4bacd67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.734094353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fac79254-b129-4d67-aacd-2212c4bacd67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.734497879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fac79254-b129-4d67-aacd-2212c4bacd67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.784551867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b537719-0ced-4317-89a3-f2bf39e94d01 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.784636233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b537719-0ced-4317-89a3-f2bf39e94d01 name=/runtime.v1.RuntimeService/Version
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.785884100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38cbc191-97fc-42a8-82c3-1ffc7dd420cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.786391439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595774786366793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38cbc191-97fc-42a8-82c3-1ffc7dd420cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.787532852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31511f89-a947-4625-88e4-36a7598252db name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.787588521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31511f89-a947-4625-88e4-36a7598252db name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.787948019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31511f89-a947-4625-88e4-36a7598252db name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.831805635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76af5f34-068e-4cea-9af9-485b57a5c3fd name=/runtime.v1.RuntimeService/Version
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.831888231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76af5f34-068e-4cea-9af9-485b57a5c3fd name=/runtime.v1.RuntimeService/Version
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.834200136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f889d07-dc14-45dc-b813-d443b2af8c9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.834679009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595774834648364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f889d07-dc14-45dc-b813-d443b2af8c9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.835944898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98e538f8-c731-4ebb-bc6d-248d29b179b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.836059501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98e538f8-c731-4ebb-bc6d-248d29b179b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.836490020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98e538f8-c731-4ebb-bc6d-248d29b179b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.862321197Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=af84e6a9-344b-4d24-9522-2d2dc76c271b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.862724520Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-dh729,Uid:03328709-2219-4b31-bc7c-ef68c899f81e,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595566178227821,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:52:11.975086179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6qp52,Uid:ae9bc7e7-c862-462e-ad97-d92e2376af58,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726595532440660222,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:52:11.975121739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&PodSandboxMetadata{Name:kube-proxy-xgjrq,Uid:a4a9ffb4-b223-4d4d-8330-76cbedfc944b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595532358208012,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-09-17T17:52:11.975083629Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b8f59650-fc1a-4e96-859e-6a630e6d50eb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595532325710236,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-17T17:52:11.975124270Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&PodSandboxMetadata{Name:kindnet-jpqbk,Uid:57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595532308506638,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:52:11.975072588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-178778,Uid:aa7e63432ab647f618158ba4465a7666,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595527480762969,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa7e63432ab647f618158ba4465a7666,kubernetes.io/config.seen: 2024-09-17T17:52:06.983354468Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&PodSandboxMetadat
a{Name:kube-scheduler-multinode-178778,Uid:69fbb8cf1a2a95cef841df9f977035fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595527479512308,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 69fbb8cf1a2a95cef841df9f977035fb,kubernetes.io/config.seen: 2024-09-17T17:52:06.983355919Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-178778,Uid:a9fc36b9d4e13ff6509050eb710296a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595527477635003,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-1
78778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.35:8443,kubernetes.io/config.hash: a9fc36b9d4e13ff6509050eb710296a9,kubernetes.io/config.seen: 2024-09-17T17:52:06.983352549Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&PodSandboxMetadata{Name:etcd-multinode-178778,Uid:d9bed66fba5ec211216ea1925b1d31c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726595527475150409,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.35:2379,kubernete
s.io/config.hash: d9bed66fba5ec211216ea1925b1d31c6,kubernetes.io/config.seen: 2024-09-17T17:52:06.983339428Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-dh729,Uid:03328709-2219-4b31-bc7c-ef68c899f81e,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595196364226282,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:46:36.051500574Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b8f59650-fc1a-4e96-859e-6a630e6d50eb,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1726595141612701319,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-17T17:45:41.302225727Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6qp52,Uid:ae9bc7e7-c862-462e-ad97-d92e2376af58,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595141610578312,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:45:41.296713152Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&PodSandboxMetadata{Name:kube-proxy-xgjrq,Uid:a4a9ffb4-b223-4d4d-8330-76cbedfc944b,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595129277962603,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:45:28.972208531Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&PodSandboxMetadata{Name:kindnet-jpqbk,Uid:57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595129229042509,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,k8s-app: kindnet,pod-t
emplate-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T17:45:28.922522320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-178778,Uid:aa7e63432ab647f618158ba4465a7666,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595118340867853,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa7e63432ab647f618158ba4465a7666,kubernetes.io/config.seen: 2024-09-17T17:45:17.637537925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:
&PodSandboxMetadata{Name:kube-scheduler-multinode-178778,Uid:69fbb8cf1a2a95cef841df9f977035fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595118317770623,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 69fbb8cf1a2a95cef841df9f977035fb,kubernetes.io/config.seen: 2024-09-17T17:45:17.637539177Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-178778,Uid:a9fc36b9d4e13ff6509050eb710296a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595118312800421,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ku
be-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.35:8443,kubernetes.io/config.hash: a9fc36b9d4e13ff6509050eb710296a9,kubernetes.io/config.seen: 2024-09-17T17:45:17.637536489Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&PodSandboxMetadata{Name:etcd-multinode-178778,Uid:d9bed66fba5ec211216ea1925b1d31c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726595118308462095,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://1
92.168.39.35:2379,kubernetes.io/config.hash: d9bed66fba5ec211216ea1925b1d31c6,kubernetes.io/config.seen: 2024-09-17T17:45:17.637531993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=af84e6a9-344b-4d24-9522-2d2dc76c271b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.864255133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92af72bc-d255-4ef3-aa04-2e159f560fab name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.864324819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92af72bc-d255-4ef3-aa04-2e159f560fab name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 17:56:14 multinode-178778 crio[2738]: time="2024-09-17 17:56:14.864693615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0405fbdc7bc87c6d18e30e57a3dde93975cf4b4755c329eff726577ab4aa08a0,PodSandboxId:0aba6a8ac13afc5a95e7c297bd868c36df27571c928eff1c85b2ba7a72219c81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726595566329753966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3,PodSandboxId:4b6c0d09ebedf46dfb7f81ac1735fa9cd0b008e40693d69d078232411c2a63e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726595532787415439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135,PodSandboxId:ccf0cada1da70fd12c77e6997b7da7765c0e2813b61c5c89829bddfff48c9d00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726595532761249026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463,PodSandboxId:b284dd0c0aaa1c67afc59e12432aa935a15681ba708ed08ceb02de5dfadf8190,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726595532640388820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330-76cbedfc944b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32d8d953255d0e96ba3b975da855906b9544159875990479b40ddf69a330960,PodSandboxId:64eb283025385a8effd7bd872ed209627d1acc98d049fc912dd18b8262866bc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726595532580777245,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24,PodSandboxId:d8ca75c103a340ea6ad2c46d609a6d8e2a436bda7a00b48ebc2215ad3b870c81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726595527806149659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df9f977035fb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4,PodSandboxId:c6e67650f8fe9fde77a87372e1e358baf900fd3c2904da863fd4cd4a7645c014,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726595527807650266,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e63432ab647f618158ba4465a7666,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db,PodSandboxId:9f322ac6a5eedf6ebde783fa1ea75d8ccac60089ed54a53f3ef5eadc184cfbb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726595527696697113,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3,PodSandboxId:d7b4d4533a9fc7e14bb13b775e065910f44677f4614d2428ec6c0e56ec7475c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726595527691472842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935cf93b3484ed31fa337cecde14b652e2ceb4d008e20cf9dec2d21ca99c5d8,PodSandboxId:b09fedf3bddf89f29d25509a6796691e01c325e88e44cbde8cbdf443fedd7f34,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726595198502732789,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dh729,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03328709-2219-4b31-bc7c-ef68c899f81e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866,PodSandboxId:7ea2370d20b04d7c496447e1fa6a0d99fd826a206f6dc6441af2f0001e513261,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726595141844813179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qp52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9bc7e7-c862-462e-ad97-d92e2376af58,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9c7ffc975f881a0c87b1434cca5ac265eb1c53b8f3cb8187bc701987009ba2,PodSandboxId:7ca00b7e670bd01d93703889dcdf905c764bd42a0aeaf3d3c4c9f0bc8d4f6a8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726595141788297943,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: b8f59650-fc1a-4e96-859e-6a630e6d50eb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9,PodSandboxId:bcd38cf96e3f52061bc550835c8864bdf8c1a0de64f6be952d9ff97bf2ffc1ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726595129671702808,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jpqbk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 57cc35fa-61ef-4dfa-bfb5-5a32ab2ceac3,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad,PodSandboxId:52d4b2fe8a56eab9472fa986766b9e75b7c6839634c8f998beb9879f22815850,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726595129388769710,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgjrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a9ffb4-b223-4d4d-8330
-76cbedfc944b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b,PodSandboxId:8618dedde106d5ebc72c32ea3b655af24dbd2814f39023d8b5a345858a442893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726595118565296729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7e634
32ab647f618158ba4465a7666,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e,PodSandboxId:5f38a03a0f184f778311f158000f5e08b66be2dedb3e985d1f525dd16df6f528,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726595118566962018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fbb8cf1a2a95cef841df
9f977035fb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402,PodSandboxId:ef58d0b7c446555e14dc25abe5669655cd8462fd8481c2861e892a9ae34e6f8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726595118541635049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9bed66fba5ec211216ea1925b1d31c6,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc,PodSandboxId:e5ecbef0bd37bdfcd02135133123463359c80d774fafcb5958232b9774ad226e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726595118534276459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-178778,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9fc36b9d4e13ff6509050eb710296a9,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92af72bc-d255-4ef3-aa04-2e159f560fab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0405fbdc7bc87       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   0aba6a8ac13af       busybox-7dff88458-dh729
	c7747cc5a5825       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   4b6c0d09ebedf       kindnet-jpqbk
	86c20e143c46a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   ccf0cada1da70       coredns-7c65d6cfc9-6qp52
	6e6b24db391cb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   b284dd0c0aaa1       kube-proxy-xgjrq
	a32d8d953255d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   64eb283025385       storage-provisioner
	f4d9d8cddcad9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   c6e67650f8fe9       kube-controller-manager-multinode-178778
	e0cc24c236795       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   d8ca75c103a34       kube-scheduler-multinode-178778
	1b6f4978954af       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   9f322ac6a5eed       kube-apiserver-multinode-178778
	c84521880b71b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d7b4d4533a9fc       etcd-multinode-178778
	c935cf93b3484       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   b09fedf3bddf8       busybox-7dff88458-dh729
	5a42732ed4168       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   7ea2370d20b04       coredns-7c65d6cfc9-6qp52
	bb9c7ffc975f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   7ca00b7e670bd       storage-provisioner
	d92c1bfd527d0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   bcd38cf96e3f5       kindnet-jpqbk
	1b25b58f20590       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   52d4b2fe8a56e       kube-proxy-xgjrq
	2dec6c2647270       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   5f38a03a0f184       kube-scheduler-multinode-178778
	8d7e1ab5d7a86       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   8618dedde106d       kube-controller-manager-multinode-178778
	4d0d3a5d8108f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   ef58d0b7c4465       etcd-multinode-178778
	b632cb69ae054       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   e5ecbef0bd37b       kube-apiserver-multinode-178778
	
	
	==> coredns [5a42732ed4168b2b853710bf32449c57a2865255fb4184fc1a47f7564d66e866] <==
	[INFO] 10.244.1.2:34177 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763538s
	[INFO] 10.244.1.2:49434 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145433s
	[INFO] 10.244.1.2:51746 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115236s
	[INFO] 10.244.1.2:49965 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001213737s
	[INFO] 10.244.1.2:56314 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066309s
	[INFO] 10.244.1.2:34888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093495s
	[INFO] 10.244.1.2:54349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094506s
	[INFO] 10.244.0.3:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113623s
	[INFO] 10.244.0.3:52920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138575s
	[INFO] 10.244.0.3:43285 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075872s
	[INFO] 10.244.0.3:46058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078939s
	[INFO] 10.244.1.2:56883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164364s
	[INFO] 10.244.1.2:47461 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183226s
	[INFO] 10.244.1.2:48640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130569s
	[INFO] 10.244.1.2:49432 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144736s
	[INFO] 10.244.0.3:52617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125296s
	[INFO] 10.244.0.3:43114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169824s
	[INFO] 10.244.0.3:38825 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012174s
	[INFO] 10.244.0.3:36682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106208s
	[INFO] 10.244.1.2:45018 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147648s
	[INFO] 10.244.1.2:46383 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164519s
	[INFO] 10.244.1.2:43690 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111118s
	[INFO] 10.244.1.2:39530 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011877s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [86c20e143c46a855604ba041aca6c0047438a8117585fc36c6d9d2c0b57a6135] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48700 - 42719 "HINFO IN 6621620421430811849.8086069219133186416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01679534s
	
	
	==> describe nodes <==
	Name:               multinode-178778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=multinode-178778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_45_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:45:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178778
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:52:11 +0000   Tue, 17 Sep 2024 17:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    multinode-178778
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c5753240dc445929e857fe9cb9def72
	  System UUID:                7c575324-0dc4-4592-9e85-7fe9cb9def72
	  Boot ID:                    a8f276e5-ab31-48df-b3de-34e52584cbf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dh729                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-7c65d6cfc9-6qp52                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-178778                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-jpqbk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-178778             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-178778    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xgjrq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-178778             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-178778 event: Registered Node multinode-178778 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-178778 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-178778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-178778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-178778 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-178778 event: Registered Node multinode-178778 in Controller
	
	
	Name:               multinode-178778-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-178778-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=multinode-178778
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_17T17_52_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:52:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-178778-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:53:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:54:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:54:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:54:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 17 Sep 2024 17:53:21 +0000   Tue, 17 Sep 2024 17:54:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    multinode-178778-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4ca9ca4ae3642c18aba8226124e3ebf
	  System UUID:                f4ca9ca4-ae36-42c1-8aba-8226124e3ebf
	  Boot ID:                    232398c8-2a88-4761-9b3b-5221ce050f77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nw788    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-2qnbk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-c8cnr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-178778-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-178778-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-178778-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-178778-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-178778-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-178778-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-178778-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-178778-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-178778-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.066262] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.164050] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.148836] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.282267] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.210236] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.165058] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.060981] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498598] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.091673] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.718557] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +1.026682] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.366049] kauditd_printk_skb: 41 callbacks suppressed
	[Sep17 17:46] kauditd_printk_skb: 14 callbacks suppressed
	[Sep17 17:51] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[  +0.156396] systemd-fstab-generator[2673]: Ignoring "noauto" option for root device
	[  +0.185248] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[  +0.153418] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.298054] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[Sep17 17:52] systemd-fstab-generator[2824]: Ignoring "noauto" option for root device
	[  +0.086195] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.073423] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[  +5.664636] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.717026] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.295449] systemd-fstab-generator[3795]: Ignoring "noauto" option for root device
	[ +19.812877] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4d0d3a5d8108feb075b79f62ab6547aaf2221f7c708a364b10e3acb5ec484402] <==
	{"level":"info","ts":"2024-09-17T17:45:19.610472Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.613212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-178778 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:45:19.613423Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:45:19.613741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:45:19.614069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:45:19.614105Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:45:19.614729Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:45:19.621091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621237Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:45:19.621811Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:45:19.622590Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:45:19.624234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	{"level":"info","ts":"2024-09-17T17:46:15.136060Z","caller":"traceutil/trace.go:171","msg":"trace[1538620714] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"100.292864ms","start":"2024-09-17T17:46:15.035734Z","end":"2024-09-17T17:46:15.136026Z","steps":["trace[1538620714] 'process raft request'  (duration: 40.580853ms)","trace[1538620714] 'compare'  (duration: 59.562208ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T17:46:15.360968Z","caller":"traceutil/trace.go:171","msg":"trace[948571904] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"157.428375ms","start":"2024-09-17T17:46:15.203520Z","end":"2024-09-17T17:46:15.360948Z","steps":["trace[948571904] 'process raft request'  (duration: 152.690087ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:50:23.814397Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T17:50:23.814511Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-178778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	{"level":"warn","ts":"2024-09-17T17:50:23.814653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.814759Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.914368Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T17:50:23.914438Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.35:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T17:50:23.914520Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"732232f81d76e930","current-leader-member-id":"732232f81d76e930"}
	{"level":"info","ts":"2024-09-17T17:50:23.917488Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:50:23.917703Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:50:23.917741Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-178778","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"]}
	
	
	==> etcd [c84521880b71b7452275804f3b79ff0aa785ebfc3d85ce0cfda1e7aaf958b0b3] <==
	{"level":"info","ts":"2024-09-17T17:52:08.099899Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","added-peer-id":"732232f81d76e930","added-peer-peer-urls":["https://192.168.39.35:2380"]}
	{"level":"info","ts":"2024-09-17T17:52:08.100092Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45f5838de4bd43f1","local-member-id":"732232f81d76e930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:52:08.100140Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:52:08.102238Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:08.103955Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T17:52:08.111195Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"732232f81d76e930","initial-advertise-peer-urls":["https://192.168.39.35:2380"],"listen-peer-urls":["https://192.168.39.35:2380"],"advertise-client-urls":["https://192.168.39.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:52:08.111686Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:52:08.112212Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:52:08.113009Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.35:2380"}
	{"level":"info","ts":"2024-09-17T17:52:09.730623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgPreVoteResp from 732232f81d76e930 at term 2"}
	{"level":"info","ts":"2024-09-17T17:52:09.730750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 received MsgVoteResp from 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"732232f81d76e930 became leader at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.730774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 732232f81d76e930 elected leader 732232f81d76e930 at term 3"}
	{"level":"info","ts":"2024-09-17T17:52:09.733458Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"732232f81d76e930","local-member-attributes":"{Name:multinode-178778 ClientURLs:[https://192.168.39.35:2379]}","request-path":"/0/members/732232f81d76e930/attributes","cluster-id":"45f5838de4bd43f1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:52:09.733671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:52:09.733750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:52:09.733782Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:52:09.733800Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:52:09.735082Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:09.736070Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:52:09.735084Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:52:09.736931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.35:2379"}
	
	
	==> kernel <==
	 17:56:15 up 11 min,  0 users,  load average: 0.03, 0.09, 0.08
	Linux multinode-178778 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c7747cc5a58254db89857ca054f1c4b1d749a368af47cb16d144d10b3806e7b3] <==
	I0917 17:55:13.844946       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:55:23.852164       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:55:23.852312       1 main.go:299] handling current node
	I0917 17:55:23.852354       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:55:23.852375       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:55:33.853053       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:55:33.853177       1 main.go:299] handling current node
	I0917 17:55:33.853220       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:55:33.853227       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:55:43.844739       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:55:43.844792       1 main.go:299] handling current node
	I0917 17:55:43.844822       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:55:43.844828       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:55:53.853829       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:55:53.853947       1 main.go:299] handling current node
	I0917 17:55:53.854038       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:55:53.854066       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:56:03.852770       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:56:03.852813       1 main.go:299] handling current node
	I0917 17:56:03.852828       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:56:03.852833       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:56:13.844927       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:56:13.844966       1 main.go:299] handling current node
	I0917 17:56:13.845026       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:56:13.845032       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d92c1bfd527d084237fb6a09891bcebbe2571a43bceb2b535e247e189a275ca9] <==
	I0917 17:49:40.849687       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:49:50.852522       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:49:50.852689       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:49:50.852863       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:49:50.852892       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:49:50.852963       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:49:50.853072       1 main.go:299] handling current node
	I0917 17:50:00.854210       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:00.854288       1 main.go:299] handling current node
	I0917 17:50:00.854303       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:00.854309       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:50:00.854487       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:00.854515       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:10.848681       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:10.848751       1 main.go:299] handling current node
	I0917 17:50:10.848769       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:10.848774       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	I0917 17:50:10.849102       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:10.849127       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:20.848574       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0917 17:50:20.848701       1 main.go:322] Node multinode-178778-m03 has CIDR [10.244.4.0/24] 
	I0917 17:50:20.848906       1 main.go:295] Handling node with IPs: map[192.168.39.35:{}]
	I0917 17:50:20.848954       1 main.go:299] handling current node
	I0917 17:50:20.849075       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0917 17:50:20.849097       1 main.go:322] Node multinode-178778-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1b6f4978954af28b30cf25e85875e8ee37fed382fa7752a10870ad1dceb734db] <==
	I0917 17:52:11.110301       1 policy_source.go:224] refreshing policies
	I0917 17:52:11.120092       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:52:11.125366       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:52:11.126345       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:52:11.135361       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:52:11.135479       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:52:11.135504       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:52:11.135604       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:52:11.137598       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 17:52:11.168143       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 17:52:11.172326       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:52:11.172421       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:52:11.172429       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:52:11.172434       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:52:11.172439       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:52:11.193925       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 17:52:11.200060       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:52:12.032576       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 17:52:13.476072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:52:13.671899       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 17:52:13.683408       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 17:52:13.780369       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 17:52:13.793645       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 17:52:14.531518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 17:52:14.823179       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b632cb69ae054b728a96c3ca2d075682a7a19b51a9d1e38b71e9d0c4185affbc] <==
	I0917 17:45:23.874818       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 17:45:28.713506       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 17:45:28.880432       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0917 17:46:39.713904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48720: use of closed network connection
	E0917 17:46:39.886268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48734: use of closed network connection
	E0917 17:46:40.067289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48750: use of closed network connection
	E0917 17:46:40.255709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48764: use of closed network connection
	E0917 17:46:40.431787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48784: use of closed network connection
	E0917 17:46:40.602437       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48798: use of closed network connection
	E0917 17:46:40.901391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48830: use of closed network connection
	E0917 17:46:41.077891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48840: use of closed network connection
	E0917 17:46:41.257588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48852: use of closed network connection
	E0917 17:46:41.424849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:48870: use of closed network connection
	I0917 17:50:23.813327       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0917 17:50:23.824425       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.830692       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.830913       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.835500       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836512       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836602       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.836970       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.839127       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.842956       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.843506       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0917 17:50:23.843595       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [8d7e1ab5d7a869fe10b29c149283fee294171d881c41c776a1cd2dfebeee766b] <==
	I0917 17:47:58.339720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:58.339820       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:47:59.451629       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-178778-m03\" does not exist"
	I0917 17:47:59.452538       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:47:59.464212       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-178778-m03" podCIDRs=["10.244.4.0/24"]
	I0917 17:47:59.464267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.464297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.474555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:47:59.913929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:00.279782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:03.250225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:09.711665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:17.646732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:17.646888       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m03"
	I0917 17:48:17.661759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:18.184449       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:48:58.205507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:48:58.205577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m03"
	I0917 17:48:58.223326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:48:58.279433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.194791ms"
	I0917 17:48:58.279592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.657µs"
	I0917 17:49:03.273831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:49:03.292801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:49:03.355429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:49:13.435738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	
	
	==> kube-controller-manager [f4d9d8cddcad905c1ed3d3dd98d944dd690b2f22cf2d5e258ef7a9195a1c77a4] <==
	I0917 17:53:29.670276       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-178778-m03" podCIDRs=["10.244.2.0/24"]
	I0917 17:53:29.670403       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:29.670929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:29.679718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:30.077838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:30.444915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:34.563441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:39.938473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:47.733229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:47.733354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:53:47.749837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:49.556728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:52.737653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:52.753687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:53.339632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m03"
	I0917 17:53:53.339670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-178778-m02"
	I0917 17:54:34.502968       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tvvwv"
	I0917 17:54:34.535209       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tvvwv"
	I0917 17:54:34.535253       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m8z6x"
	I0917 17:54:34.580737       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-m8z6x"
	I0917 17:54:34.581267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:54:34.598401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	I0917 17:54:34.612640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.854887ms"
	I0917 17:54:34.612835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.92µs"
	I0917 17:54:39.683047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-178778-m02"
	
	
	==> kube-proxy [1b25b58f20590fc6b00792854ea4852554e14daa8c313ec2cf96b998597e2bad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:45:29.604115       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:45:29.624029       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0917 17:45:29.624209       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:45:29.686269       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:45:29.686323       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:45:29.686387       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:45:29.689082       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:45:29.689473       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:45:29.689500       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:45:29.691114       1 config.go:199] "Starting service config controller"
	I0917 17:45:29.691161       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:45:29.691191       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:45:29.691196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:45:29.693876       1 config.go:328] "Starting node config controller"
	I0917 17:45:29.693908       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:45:29.791568       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:45:29.791639       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:45:29.794650       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6e6b24db391cb933da944ac182a332ce0db5f3474386509f7b9cf4cfb08e2463] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 17:52:13.253126       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 17:52:13.278535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.35"]
	E0917 17:52:13.278672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:52:13.409705       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 17:52:13.409820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 17:52:13.409862       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:52:13.418230       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:52:13.418601       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:52:13.418629       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:52:13.422681       1 config.go:328] "Starting node config controller"
	I0917 17:52:13.422772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:52:13.422887       1 config.go:199] "Starting service config controller"
	I0917 17:52:13.422943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:52:13.423031       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:52:13.423051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:52:13.523426       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:52:13.523560       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:52:13.523666       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2dec6c2647270691d88ea7a950f9ad02bff02a646e0eb5139498c65a76b8471e] <==
	E0917 17:45:22.106240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.135512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:45:22.135607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.158913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:45:22.158967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.173931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:45:22.174027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.334363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:45:22.334513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.343318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:45:22.343371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.378497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:45:22.378632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.391028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:45:22.391179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.472897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:45:22.473067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.517325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 17:45:22.517440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:45:22.543782       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 17:45:22.543890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 17:45:22.546448       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:45:22.546497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 17:45:25.387053       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:50:23.818061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e0cc24c2367951f0d65aac87876e0e147ad370db1e920c526de00dac0e4a8f24] <==
	I0917 17:52:09.112927       1 serving.go:386] Generated self-signed cert in-memory
	W0917 17:52:11.080936       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 17:52:11.081036       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 17:52:11.081049       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 17:52:11.081061       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 17:52:11.119767       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 17:52:11.119827       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:52:11.122633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 17:52:11.122735       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:52:11.122807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 17:52:11.122908       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:52:11.223701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:54:57 multinode-178778 kubelet[2954]: E0917 17:54:57.132354    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595697131691080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:07 multinode-178778 kubelet[2954]: E0917 17:55:07.047148    2954 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:55:07 multinode-178778 kubelet[2954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:55:07 multinode-178778 kubelet[2954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:55:07 multinode-178778 kubelet[2954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:55:07 multinode-178778 kubelet[2954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:55:07 multinode-178778 kubelet[2954]: E0917 17:55:07.135136    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595707134521076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:07 multinode-178778 kubelet[2954]: E0917 17:55:07.135160    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595707134521076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:17 multinode-178778 kubelet[2954]: E0917 17:55:17.137121    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595717136675811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:17 multinode-178778 kubelet[2954]: E0917 17:55:17.137543    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595717136675811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:27 multinode-178778 kubelet[2954]: E0917 17:55:27.139526    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595727139191352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:27 multinode-178778 kubelet[2954]: E0917 17:55:27.139803    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595727139191352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:37 multinode-178778 kubelet[2954]: E0917 17:55:37.142048    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595737141559511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:37 multinode-178778 kubelet[2954]: E0917 17:55:37.142088    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595737141559511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:47 multinode-178778 kubelet[2954]: E0917 17:55:47.143509    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595747142872250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:47 multinode-178778 kubelet[2954]: E0917 17:55:47.143562    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595747142872250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:57 multinode-178778 kubelet[2954]: E0917 17:55:57.147429    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595757146260665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:55:57 multinode-178778 kubelet[2954]: E0917 17:55:57.147542    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595757146260665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:56:07 multinode-178778 kubelet[2954]: E0917 17:56:07.052914    2954 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 17:56:07 multinode-178778 kubelet[2954]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 17:56:07 multinode-178778 kubelet[2954]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 17:56:07 multinode-178778 kubelet[2954]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 17:56:07 multinode-178778 kubelet[2954]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 17:56:07 multinode-178778 kubelet[2954]: E0917 17:56:07.149380    2954 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595767148591281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 17:56:07 multinode-178778 kubelet[2954]: E0917 17:56:07.149520    2954 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726595767148591281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 17:56:14.407505   49872 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
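The stderr above fails with "bufio.Scanner: token too long" because a single line in lastStart.txt exceeds bufio.Scanner's default token limit (bufio.MaxScanTokenSize, 64 KiB). Below is a minimal Go sketch of reading such a file with a larger buffer; the path is copied from the log and the 1 MiB cap is an assumed value, not whatever limit minikube itself would choose.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the stderr above; adjust for your environment.
		f, err := os.Open("/home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); very long log lines
		// trip "token too long". The 1 MiB cap below is an assumed value.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}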
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-178778 -n multinode-178778
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-178778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.61s)
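The kubelet log in this section repeatedly reports "Could not set up iptables canary" because ip6tables cannot initialize its nat table inside the guest ("Table does not exist (do you need to insmod?)"). A minimal Go check for the usual kernel module behind that table follows; it assumes the module name ip6table_nat and that it is run inside the guest (for example via minikube ssh), and since built-in modules do not appear in /proc/modules a miss here is only suggestive.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// ip6table_nat is the usual module behind the ip6tables "nat" table that
		// the canary in the kubelet log could not initialize; the name is an
		// assumption about this guest image.
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip6table_nat ") {
				fmt.Println("ip6table_nat is loaded")
				return
			}
		}
		fmt.Println("ip6table_nat not listed; matches the 'Table does not exist' canary error")
	}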

                                                
                                    
x
+
TestPreload (272.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817939 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0917 18:01:24.985532   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817939 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m10.330596614s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817939 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-817939 image pull gcr.io/k8s-minikube/busybox: (2.308521364s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-817939
E0917 18:03:33.605519   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:50.535008   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-817939: exit status 82 (2m0.459936639s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-817939"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-817939 failed: exit status 82
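The stop above exits with status 82, which the stderr maps to GUEST_STOP_TIMEOUT: the VM is still "Running" when minikube gives up waiting. Below is a minimal Go sketch for re-running the same command outside the test harness and surfacing that exit code; the binary path and profile name are copied from the log, and the 3-minute deadline is an assumed value.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Binary and profile name are copied from the test output above; the
		// 3-minute deadline is an arbitrary choice for manual reproduction.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "test-preload-817939", "--alsologtostderr")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The harness saw 82 here, reported by minikube as GUEST_STOP_TIMEOUT.
			fmt.Println("minikube stop exit code:", exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, "run error:", err)
		}
	}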
panic.go:629: *** TestPreload FAILED at 2024-09-17 18:04:39.475265076 +0000 UTC m=+4148.549837353
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-817939 -n test-preload-817939
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-817939 -n test-preload-817939: exit status 3 (18.621911492s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:04:58.093601   52790 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0917 18:04:58.093624   52790 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-817939" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-817939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-817939
--- FAIL: TestPreload (272.66s)

                                                
                                    
x
+
TestKubernetesUpgrade (495.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m58.214955586s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-644038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-644038" primary control-plane node in "kubernetes-upgrade-644038" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:06:53.328405   53861 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:06:53.328784   53861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:06:53.328794   53861 out.go:358] Setting ErrFile to fd 2...
	I0917 18:06:53.328799   53861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:06:53.329001   53861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:06:53.329617   53861 out.go:352] Setting JSON to false
	I0917 18:06:53.330436   53861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6528,"bootTime":1726589885,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:06:53.330529   53861 start.go:139] virtualization: kvm guest
	I0917 18:06:53.332334   53861 out.go:177] * [kubernetes-upgrade-644038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:06:53.334130   53861 notify.go:220] Checking for updates...
	I0917 18:06:53.335531   53861 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:06:53.338025   53861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:06:53.339741   53861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:06:53.340960   53861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:06:53.342242   53861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:06:53.343402   53861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:06:53.345096   53861 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:06:53.384577   53861 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:06:53.385990   53861 start.go:297] selected driver: kvm2
	I0917 18:06:53.386006   53861 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:06:53.386027   53861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:06:53.386884   53861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:06:53.404390   53861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:06:53.423179   53861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:06:53.423244   53861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 18:06:53.423556   53861 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 18:06:53.423593   53861 cni.go:84] Creating CNI manager for ""
	I0917 18:06:53.423651   53861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:06:53.423660   53861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 18:06:53.423728   53861 start.go:340] cluster config:
	{Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:06:53.423852   53861 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:06:53.426045   53861 out.go:177] * Starting "kubernetes-upgrade-644038" primary control-plane node in "kubernetes-upgrade-644038" cluster
	I0917 18:06:53.427527   53861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:06:53.427588   53861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:06:53.427606   53861 cache.go:56] Caching tarball of preloaded images
	I0917 18:06:53.427700   53861 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:06:53.427711   53861 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:06:53.428227   53861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/config.json ...
	I0917 18:06:53.428273   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/config.json: {Name:mk87ec87bc2db1d6519d28264dbdc26a74eae577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:06:53.428474   53861 start.go:360] acquireMachinesLock for kubernetes-upgrade-644038: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:07:19.802319   53861 start.go:364] duration metric: took 26.373814976s to acquireMachinesLock for "kubernetes-upgrade-644038"
	I0917 18:07:19.802403   53861 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:07:19.802497   53861 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:07:19.804949   53861 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 18:07:19.805149   53861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:07:19.805206   53861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:07:19.822583   53861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I0917 18:07:19.822921   53861 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:07:19.823551   53861 main.go:141] libmachine: Using API Version  1
	I0917 18:07:19.823582   53861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:07:19.823908   53861 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:07:19.824112   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:07:19.824265   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:19.824429   53861 start.go:159] libmachine.API.Create for "kubernetes-upgrade-644038" (driver="kvm2")
	I0917 18:07:19.824475   53861 client.go:168] LocalClient.Create starting
	I0917 18:07:19.824504   53861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:07:19.824539   53861 main.go:141] libmachine: Decoding PEM data...
	I0917 18:07:19.824557   53861 main.go:141] libmachine: Parsing certificate...
	I0917 18:07:19.824624   53861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:07:19.824649   53861 main.go:141] libmachine: Decoding PEM data...
	I0917 18:07:19.824661   53861 main.go:141] libmachine: Parsing certificate...
	I0917 18:07:19.824690   53861 main.go:141] libmachine: Running pre-create checks...
	I0917 18:07:19.824713   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .PreCreateCheck
	I0917 18:07:19.825013   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetConfigRaw
	I0917 18:07:19.825433   53861 main.go:141] libmachine: Creating machine...
	I0917 18:07:19.825446   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .Create
	I0917 18:07:19.825599   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Creating KVM machine...
	I0917 18:07:19.826785   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found existing default KVM network
	I0917 18:07:19.827591   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:19.827424   56300 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:10:8d} reservation:<nil>}
	I0917 18:07:19.828279   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:19.828212   56300 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00024a400}
	I0917 18:07:19.828324   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | created network xml: 
	I0917 18:07:19.828349   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | <network>
	I0917 18:07:19.828363   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   <name>mk-kubernetes-upgrade-644038</name>
	I0917 18:07:19.828373   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   <dns enable='no'/>
	I0917 18:07:19.828381   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   
	I0917 18:07:19.828393   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0917 18:07:19.828408   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |     <dhcp>
	I0917 18:07:19.828420   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0917 18:07:19.828435   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |     </dhcp>
	I0917 18:07:19.828450   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   </ip>
	I0917 18:07:19.828461   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG |   
	I0917 18:07:19.828468   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | </network>
	I0917 18:07:19.828487   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | 
	I0917 18:07:19.834000   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | trying to create private KVM network mk-kubernetes-upgrade-644038 192.168.50.0/24...
	I0917 18:07:19.906401   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | private KVM network mk-kubernetes-upgrade-644038 192.168.50.0/24 created
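
[Editor's note] The lines above show the kvm2 driver generating network XML and creating the private libvirt network mk-kubernetes-upgrade-644038. The driver itself talks to libvirt directly; the following is only a rough, hedged sketch of reproducing the same step by hand from Go by shelling out to virsh. It assumes virsh is installed and the user may manage qemu:///system; the network name, subnet and DHCP range are copied from the log.

// network_sketch.go — hedged illustration only: approximates the "create
// private KVM network" step with virsh instead of the libvirt API.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-kubernetes-upgrade-644038</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the XML to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define, start and autostart the network, mirroring
	// "private KVM network mk-kubernetes-upgrade-644038 192.168.50.0/24 created".
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-kubernetes-upgrade-644038"},
		{"net-autostart", "mk-kubernetes-upgrade-644038"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: %s", args, out)
	}
}
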
	I0917 18:07:19.906434   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038 ...
	I0917 18:07:19.906448   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:19.906362   56300 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:07:19.906465   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:07:19.906549   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:07:20.151213   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:20.151078   56300 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa...
	I0917 18:07:20.258923   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:20.258754   56300 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/kubernetes-upgrade-644038.rawdisk...
	I0917 18:07:20.258959   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Writing magic tar header
	I0917 18:07:20.258975   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Writing SSH key tar header
	I0917 18:07:20.258989   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:20.258871   56300 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038 ...
	I0917 18:07:20.259011   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038
	I0917 18:07:20.259029   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:07:20.259042   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038 (perms=drwx------)
	I0917 18:07:20.259056   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:07:20.259067   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:07:20.259081   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:07:20.259095   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:07:20.259109   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:07:20.259120   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:07:20.259135   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:07:20.259146   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Creating domain...
	I0917 18:07:20.259158   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:07:20.259169   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:07:20.259180   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Checking permissions on dir: /home
	I0917 18:07:20.259192   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Skipping /home - not owner
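
[Editor's note] The "Creating ssh key" / "Creating raw disk image" steps above come down to a fresh key pair plus a sparse raw disk placed next to the boot ISO. A hedged approximation with standard tools follows; the driver also embeds the public key inside the image (the "Writing SSH key tar header" line), which is skipped here, and the qemu-img call is an assumption for illustration, not the driver's own mechanism.

// diskimage_sketch.go — illustrative only: create an SSH key pair and a sparse
// 20000MB raw disk in the machine directory shown in the log.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	dir := "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038"
	// Passwordless RSA key pair, like .minikube/machines/<name>/id_rsa.
	run("ssh-keygen", "-t", "rsa", "-N", "", "-f", dir+"/id_rsa")
	// Sparse raw disk matching DiskSize:20000 from the config dump.
	run("qemu-img", "create", "-f", "raw", dir+"/kubernetes-upgrade-644038.rawdisk", "20000M")
}
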
	I0917 18:07:20.260267   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) define libvirt domain using xml: 
	I0917 18:07:20.260295   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) <domain type='kvm'>
	I0917 18:07:20.260307   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <name>kubernetes-upgrade-644038</name>
	I0917 18:07:20.260325   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <memory unit='MiB'>2200</memory>
	I0917 18:07:20.260338   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <vcpu>2</vcpu>
	I0917 18:07:20.260344   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <features>
	I0917 18:07:20.260353   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <acpi/>
	I0917 18:07:20.260363   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <apic/>
	I0917 18:07:20.260378   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <pae/>
	I0917 18:07:20.260387   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     
	I0917 18:07:20.260505   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   </features>
	I0917 18:07:20.260553   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <cpu mode='host-passthrough'>
	I0917 18:07:20.260567   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   
	I0917 18:07:20.260577   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   </cpu>
	I0917 18:07:20.260589   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <os>
	I0917 18:07:20.260597   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <type>hvm</type>
	I0917 18:07:20.260607   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <boot dev='cdrom'/>
	I0917 18:07:20.260617   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <boot dev='hd'/>
	I0917 18:07:20.260627   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <bootmenu enable='no'/>
	I0917 18:07:20.260636   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   </os>
	I0917 18:07:20.260644   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   <devices>
	I0917 18:07:20.260656   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <disk type='file' device='cdrom'>
	I0917 18:07:20.260684   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/boot2docker.iso'/>
	I0917 18:07:20.260698   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <target dev='hdc' bus='scsi'/>
	I0917 18:07:20.260707   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <readonly/>
	I0917 18:07:20.260723   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </disk>
	I0917 18:07:20.260754   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <disk type='file' device='disk'>
	I0917 18:07:20.260782   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:07:20.260802   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/kubernetes-upgrade-644038.rawdisk'/>
	I0917 18:07:20.260814   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <target dev='hda' bus='virtio'/>
	I0917 18:07:20.260827   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </disk>
	I0917 18:07:20.260838   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <interface type='network'>
	I0917 18:07:20.260849   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <source network='mk-kubernetes-upgrade-644038'/>
	I0917 18:07:20.260864   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <model type='virtio'/>
	I0917 18:07:20.260876   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </interface>
	I0917 18:07:20.260886   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <interface type='network'>
	I0917 18:07:20.260897   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <source network='default'/>
	I0917 18:07:20.260931   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <model type='virtio'/>
	I0917 18:07:20.260958   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </interface>
	I0917 18:07:20.260979   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <serial type='pty'>
	I0917 18:07:20.260994   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <target port='0'/>
	I0917 18:07:20.261005   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </serial>
	I0917 18:07:20.261022   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <console type='pty'>
	I0917 18:07:20.261041   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <target type='serial' port='0'/>
	I0917 18:07:20.261052   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </console>
	I0917 18:07:20.261063   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     <rng model='virtio'>
	I0917 18:07:20.261075   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)       <backend model='random'>/dev/random</backend>
	I0917 18:07:20.261085   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     </rng>
	I0917 18:07:20.261094   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     
	I0917 18:07:20.261103   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)     
	I0917 18:07:20.261129   53861 main.go:141] libmachine: (kubernetes-upgrade-644038)   </devices>
	I0917 18:07:20.261149   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) </domain>
	I0917 18:07:20.261164   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) 
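
[Editor's note] With the domain XML assembled above, the driver defines the domain and boots it. A hedged manual equivalent with virsh, assuming the XML has been saved to domain.xml (illustrative only; the driver calls libvirt directly rather than the CLI):

// domain_sketch.go — illustrative only: define and start a libvirt domain from
// an XML file like the one printed above, then list its NIC MAC addresses
// (the log's "has defined MAC address ..." lines report the same data).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func virsh(args ...string) string {
	out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	virsh("define", "domain.xml")                               // register the domain
	virsh("start", "kubernetes-upgrade-644038")                 // boot it
	fmt.Print(virsh("domiflist", "kubernetes-upgrade-644038"))  // interface -> network -> MAC table
}
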
	I0917 18:07:20.265588   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:13:10:fc in network default
	I0917 18:07:20.266232   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Ensuring networks are active...
	I0917 18:07:20.266261   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:20.266952   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Ensuring network default is active
	I0917 18:07:20.267350   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Ensuring network mk-kubernetes-upgrade-644038 is active
	I0917 18:07:20.267904   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Getting domain xml...
	I0917 18:07:20.268741   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Creating domain...
	I0917 18:07:21.573333   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Waiting to get IP...
	I0917 18:07:21.574041   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:21.574493   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:21.574514   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:21.574445   56300 retry.go:31] will retry after 210.992715ms: waiting for machine to come up
	I0917 18:07:21.787153   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:21.787562   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:21.787598   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:21.787512   56300 retry.go:31] will retry after 239.888825ms: waiting for machine to come up
	I0917 18:07:22.029144   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:22.029663   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:22.029723   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:22.029615   56300 retry.go:31] will retry after 408.949289ms: waiting for machine to come up
	I0917 18:07:22.440409   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:22.440937   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:22.440967   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:22.440883   56300 retry.go:31] will retry after 567.984048ms: waiting for machine to come up
	I0917 18:07:23.010714   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:23.011227   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:23.011263   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:23.011167   56300 retry.go:31] will retry after 611.509894ms: waiting for machine to come up
	I0917 18:07:23.624030   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:23.624495   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:23.624524   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:23.624448   56300 retry.go:31] will retry after 636.896443ms: waiting for machine to come up
	I0917 18:07:24.263269   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:24.263773   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:24.263804   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:24.263687   56300 retry.go:31] will retry after 1.093653171s: waiting for machine to come up
	I0917 18:07:25.359651   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:25.360152   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:25.360187   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:25.360108   56300 retry.go:31] will retry after 953.619235ms: waiting for machine to come up
	I0917 18:07:26.317244   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:26.317845   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:26.317863   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:26.317815   56300 retry.go:31] will retry after 1.198693597s: waiting for machine to come up
	I0917 18:07:27.517881   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:27.518317   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:27.518344   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:27.518264   56300 retry.go:31] will retry after 1.440673786s: waiting for machine to come up
	I0917 18:07:28.960056   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:28.960589   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:28.960623   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:28.960495   56300 retry.go:31] will retry after 2.075272801s: waiting for machine to come up
	I0917 18:07:31.037328   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:31.037894   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:31.037940   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:31.037839   56300 retry.go:31] will retry after 3.210623617s: waiting for machine to come up
	I0917 18:07:34.249896   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:34.250440   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:34.250469   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:34.250387   56300 retry.go:31] will retry after 4.497392918s: waiting for machine to come up
	I0917 18:07:38.752400   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:38.752775   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find current IP address of domain kubernetes-upgrade-644038 in network mk-kubernetes-upgrade-644038
	I0917 18:07:38.752801   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | I0917 18:07:38.752755   56300 retry.go:31] will retry after 4.1956813s: waiting for machine to come up
	I0917 18:07:42.950222   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:42.950712   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Found IP for machine: 192.168.50.134
	I0917 18:07:42.950740   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Reserving static IP address...
	I0917 18:07:42.950781   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has current primary IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:42.951196   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-644038", mac: "52:54:00:73:ec:bf", ip: "192.168.50.134"} in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.028802   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Getting to WaitForSSH function...
	I0917 18:07:43.028836   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Reserved static IP address: 192.168.50.134
	I0917 18:07:43.028850   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Waiting for SSH to be available...
	I0917 18:07:43.031378   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.031855   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.031890   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.032137   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Using SSH client type: external
	I0917 18:07:43.032163   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa (-rw-------)
	I0917 18:07:43.032206   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:07:43.032221   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | About to run SSH command:
	I0917 18:07:43.032233   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | exit 0
	I0917 18:07:43.161769   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | SSH cmd err, output: <nil>: 
	I0917 18:07:43.162013   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) KVM machine creation complete!
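
[Editor's note] The retry lines above poll the network for a DHCP lease with growing delays, then confirm SSH works by running a bare `exit 0`. A hedged sketch of that wait loop follows; it shells out to virsh and the ssh client (the driver uses libvirt and an SSH library instead), and the backoff cap is illustrative. The MAC address, network name and key path are taken from the log.

// waitip_sketch.go — illustrative wait loop: poll the libvirt network's DHCP
// leases for the domain's MAC, then confirm SSH is reachable by running
// "exit 0", mirroring the "will retry after ..." and WaitForSSH log lines.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

const (
	network = "mk-kubernetes-upgrade-644038"
	mac     = "52:54:00:73:ec:bf"
	sshKey  = "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa"
)

// leaseIP scans `virsh net-dhcp-leases` output for our MAC and returns the IP, if any.
func leaseIP() (string, bool) {
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-dhcp-leases", network).CombinedOutput()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(line, mac) {
			continue
		}
		for _, f := range strings.Fields(line) {
			if strings.Contains(f, "/") { // address column looks like 192.168.50.134/24
				return strings.SplitN(f, "/", 2)[0], true
			}
		}
	}
	return "", false
}

func main() {
	var ip string
	delay := 200 * time.Millisecond
	for {
		if got, ok := leaseIP(); ok {
			ip = got
			break
		}
		log.Printf("no lease yet, retrying after %v", delay)
		time.Sleep(delay)
		if delay < 5*time.Second { // cap the backoff, roughly like the log's retry intervals
			delay *= 2
		}
	}
	// SSH probe: the same bare "exit 0" the provisioner runs first.
	probe := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", sshKey,
		"docker@"+ip, "exit 0")
	if out, err := probe.CombinedOutput(); err != nil {
		log.Fatalf("ssh probe failed: %v\n%s", err, out)
	}
	fmt.Println("machine is up at", ip)
}
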
	I0917 18:07:43.162364   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetConfigRaw
	I0917 18:07:43.163149   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:43.163361   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:43.163530   53861 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:07:43.163548   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetState
	I0917 18:07:43.165093   53861 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:07:43.165109   53861 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:07:43.165132   53861 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:07:43.165145   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.167781   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.168200   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.168229   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.168355   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:43.168529   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.168682   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.168816   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:43.169028   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:43.169287   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:43.169303   53861 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:07:43.285810   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:07:43.285837   53861 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:07:43.285845   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.288895   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.289257   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.289300   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.289505   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:43.289697   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.289862   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.290006   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:43.290203   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:43.290428   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:43.290446   53861 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:07:43.410701   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:07:43.410779   53861 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:07:43.410789   53861 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:07:43.410799   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:07:43.411080   53861 buildroot.go:166] provisioning hostname "kubernetes-upgrade-644038"
	I0917 18:07:43.411104   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:07:43.411302   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.414336   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.414733   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.414765   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.414951   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:43.415161   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.415347   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.415507   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:43.415708   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:43.415913   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:43.415926   53861 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-644038 && echo "kubernetes-upgrade-644038" | sudo tee /etc/hostname
	I0917 18:07:43.555427   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-644038
	
	I0917 18:07:43.555460   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.558416   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.558894   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.558926   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.559106   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:43.559299   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.559475   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.559627   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:43.559762   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:43.559958   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:43.559981   53861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-644038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-644038/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-644038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:07:43.687794   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:07:43.687831   53861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:07:43.687881   53861 buildroot.go:174] setting up certificates
	I0917 18:07:43.687902   53861 provision.go:84] configureAuth start
	I0917 18:07:43.687918   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:07:43.688208   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:07:43.691321   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.691733   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.691766   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.691963   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.694559   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.694876   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.694903   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.695087   53861 provision.go:143] copyHostCerts
	I0917 18:07:43.695156   53861 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:07:43.695169   53861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:07:43.695241   53861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:07:43.695382   53861 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:07:43.695395   53861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:07:43.695423   53861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:07:43.695536   53861 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:07:43.695550   53861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:07:43.695579   53861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:07:43.695664   53861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-644038 san=[127.0.0.1 192.168.50.134 kubernetes-upgrade-644038 localhost minikube]
	I0917 18:07:43.878426   53861 provision.go:177] copyRemoteCerts
	I0917 18:07:43.878497   53861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:07:43.878520   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:43.881281   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.881557   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:43.881598   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:43.881844   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:43.882031   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:43.882188   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:43.882322   53861 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:07:43.968382   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:07:43.998225   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:07:44.024367   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0917 18:07:44.051921   53861 provision.go:87] duration metric: took 364.002285ms to configureAuth
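
[Editor's note] configureAuth above generates a server certificate whose SANs are 127.0.0.1, 192.168.50.134, kubernetes-upgrade-644038, localhost and minikube, then copies it to /etc/docker on the guest. A hedged sketch of building a certificate with that SAN set using Go's crypto/x509 follows; it is self-signed for brevity, whereas the real flow signs with the minikube CA key, and the expiry simply reuses the CertExpiration value from the config dump.

// servercert_sketch.go — illustrative only: issue a server certificate with the
// SAN set the log reports. Self-signed here; minikube signs with its CA key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-644038"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: san=[127.0.0.1 192.168.50.134 kubernetes-upgrade-644038 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.134")},
		DNSNames:    []string{"kubernetes-upgrade-644038", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("server.pem", certPEM, 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyPEM, 0600); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote server.pem and server-key.pem")
}
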
	I0917 18:07:44.051959   53861 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:07:44.052143   53861 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:07:44.052216   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:44.055157   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.055556   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.055584   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.055745   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:44.055942   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.056138   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.056293   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:44.056441   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:44.056660   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:44.056681   53861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:07:44.293490   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
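
[Editor's note] The SSH command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR is treated as an insecure registry range. A hedged local equivalent in Go follows; it must run as root on the guest, the provisioner performs the same step over SSH, and the file content is copied verbatim from the log.

// criocfg_sketch.go — illustrative only: write the sysconfig drop-in shown in
// the log and restart CRI-O.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
	log.Print("CRI-O restarted with ", dropIn)
}
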
	I0917 18:07:44.293516   53861 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:07:44.293523   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetURL
	I0917 18:07:44.294881   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | Using libvirt version 6000000
	I0917 18:07:44.297144   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.297487   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.297512   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.297652   53861 main.go:141] libmachine: Docker is up and running!
	I0917 18:07:44.297665   53861 main.go:141] libmachine: Reticulating splines...
	I0917 18:07:44.297691   53861 client.go:171] duration metric: took 24.473188368s to LocalClient.Create
	I0917 18:07:44.297718   53861 start.go:167] duration metric: took 24.473289013s to libmachine.API.Create "kubernetes-upgrade-644038"
	I0917 18:07:44.297730   53861 start.go:293] postStartSetup for "kubernetes-upgrade-644038" (driver="kvm2")
	I0917 18:07:44.297743   53861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:07:44.297767   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:44.298000   53861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:07:44.298024   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:44.299862   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.300223   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.300248   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.300348   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:44.300525   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.300684   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:44.300832   53861 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:07:44.388638   53861 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:07:44.393337   53861 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:07:44.393374   53861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:07:44.393447   53861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:07:44.393541   53861 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:07:44.393635   53861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:07:44.403323   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:07:44.429770   53861 start.go:296] duration metric: took 132.026619ms for postStartSetup
	I0917 18:07:44.429825   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetConfigRaw
	I0917 18:07:44.430424   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:07:44.433012   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.433351   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.433383   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.433604   53861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/config.json ...
	I0917 18:07:44.433808   53861 start.go:128] duration metric: took 24.631301342s to createHost
	I0917 18:07:44.433833   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:44.436175   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.436511   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.436542   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.436672   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:44.436882   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.437001   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.437149   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:44.437300   53861 main.go:141] libmachine: Using SSH client type: native
	I0917 18:07:44.437473   53861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:07:44.437483   53861 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:07:44.550175   53861 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596464.523002060
	
	I0917 18:07:44.550200   53861 fix.go:216] guest clock: 1726596464.523002060
	I0917 18:07:44.550210   53861 fix.go:229] Guest: 2024-09-17 18:07:44.52300206 +0000 UTC Remote: 2024-09-17 18:07:44.433822343 +0000 UTC m=+51.151330377 (delta=89.179717ms)
	I0917 18:07:44.550250   53861 fix.go:200] guest clock delta is within tolerance: 89.179717ms
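
[Editor's note] The guest-clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the roughly 89ms difference. A small hedged sketch of that comparison follows, using the two timestamps from the log; the 1-second tolerance is an illustrative threshold, not necessarily the value fix.go uses.

// clockdelta_sketch.go — illustrative only: recompute the guest/host clock
// delta reported by the "guest clock delta is within tolerance" line.
package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1726596464.523002060" // `date +%s.%N` output from the guest (from the log)
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Date(2024, 9, 17, 18, 7, 44, 433822343, time.UTC) // host-side timestamp from the log

	delta := guest.Sub(host)
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) < 1.0 { // illustrative tolerance
		fmt.Println("delta is within tolerance")
	}
}
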
	I0917 18:07:44.550255   53861 start.go:83] releasing machines lock for "kubernetes-upgrade-644038", held for 24.747895517s
	I0917 18:07:44.550283   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:44.550561   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:07:44.553704   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.554155   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.554193   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.554401   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:44.554935   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:44.555141   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:07:44.555234   53861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:07:44.555271   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:44.555322   53861 ssh_runner.go:195] Run: cat /version.json
	I0917 18:07:44.555350   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:07:44.558534   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.558793   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.558959   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.558990   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.559201   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:44.559293   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:44.559327   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:44.559466   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:07:44.559556   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.559655   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:07:44.559732   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:44.559774   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:07:44.559821   53861 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:07:44.560147   53861 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:07:44.669505   53861 ssh_runner.go:195] Run: systemctl --version
	I0917 18:07:44.677096   53861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:07:44.850036   53861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:07:44.858942   53861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:07:44.859003   53861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:07:44.881720   53861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:07:44.881748   53861 start.go:495] detecting cgroup driver to use...
	I0917 18:07:44.881820   53861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:07:44.906605   53861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:07:44.922562   53861 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:07:44.922649   53861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:07:44.940192   53861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:07:44.957290   53861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:07:45.085284   53861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:07:45.266891   53861 docker.go:233] disabling docker service ...
	I0917 18:07:45.266954   53861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:07:45.282460   53861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:07:45.296907   53861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:07:45.425537   53861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:07:45.581573   53861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:07:45.596148   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:07:45.616443   53861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:07:45.616543   53861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:07:45.627740   53861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:07:45.627817   53861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:07:45.639377   53861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:07:45.651655   53861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:07:45.663557   53861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
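The sed edits above rewrite the CRI-O drop-in so the runtime matches what the test expects. Reconstructed from those commands, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf would read:

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"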
	I0917 18:07:45.675955   53861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:07:45.686820   53861 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:07:45.686883   53861 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:07:45.700920   53861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
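Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, minikube falls back to loading the br_netfilter module and then enables IPv4 forwarding directly, exactly as the two commands above show. The equivalent manual steps (persisting them via sysctl.d would be an extra step not shown in this log):

    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"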
	I0917 18:07:45.711271   53861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:07:45.863440   53861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:07:45.975341   53861 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:07:45.975419   53861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:07:45.980534   53861 start.go:563] Will wait 60s for crictl version
	I0917 18:07:45.980605   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:45.984858   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:07:46.030339   53861 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:07:46.030443   53861 ssh_runner.go:195] Run: crio --version
	I0917 18:07:46.070389   53861 ssh_runner.go:195] Run: crio --version
	I0917 18:07:46.103248   53861 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:07:46.104458   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:07:46.107570   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:46.108031   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:07:35 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:07:46.108056   53861 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:07:46.108323   53861 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:07:46.113031   53861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:07:46.128223   53861 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:07:46.128411   53861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:07:46.128481   53861 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:07:46.169448   53861 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:07:46.169529   53861 ssh_runner.go:195] Run: which lz4
	I0917 18:07:46.173933   53861 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:07:46.179914   53861 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:07:46.179943   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:07:48.067216   53861 crio.go:462] duration metric: took 1.893313361s to copy over tarball
	I0917 18:07:48.067313   53861 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:07:50.794061   53861 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.726707856s)
	I0917 18:07:50.794098   53861 crio.go:469] duration metric: took 2.726848927s to extract the tarball
	I0917 18:07:50.794107   53861 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:07:50.843876   53861 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:07:50.892549   53861 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:07:50.892582   53861 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:07:50.892669   53861 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:07:50.892704   53861 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:50.892677   53861 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:50.892728   53861 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:50.892741   53861 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:07:50.892751   53861 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:07:50.892727   53861 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:50.892817   53861 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:50.894509   53861 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:07:50.894520   53861 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:50.894545   53861 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:07:50.894499   53861 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:50.894515   53861 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:50.894587   53861 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:50.894646   53861 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:50.894596   53861 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.070343   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:51.072335   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:51.086295   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:51.087070   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:51.098112   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:51.098335   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.116733   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:07:51.176882   53861 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:07:51.176942   53861 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:51.176993   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.196076   53861 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:07:51.196124   53861 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:51.196175   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.270176   53861 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:07:51.270211   53861 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:07:51.270222   53861 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:51.270246   53861 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:51.270278   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.270293   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.270351   53861 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:07:51.270393   53861 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:51.270430   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.277017   53861 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:07:51.277062   53861 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.277097   53861 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:07:51.277109   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.277130   53861 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:07:51.277159   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:51.277169   53861 ssh_runner.go:195] Run: which crictl
	I0917 18:07:51.277242   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:51.287751   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:51.287791   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:51.287839   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:51.354943   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:51.354975   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.396612   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:07:51.396674   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:51.396617   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:51.457855   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:51.457882   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:51.537382   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.537485   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:07:51.543380   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:07:51.588938   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:07:51.588943   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:07:51.645266   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:07:51.645289   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:07:51.648350   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:07:51.648417   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:07:51.739035   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:07:51.739157   53861 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:07:51.760465   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:07:51.798588   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:07:51.798675   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:07:51.798736   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:07:51.804653   53861 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:07:51.826803   53861 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:07:51.968958   53861 cache_images.go:92] duration metric: took 1.076353619s to LoadCachedImages
	W0917 18:07:51.969052   53861 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0917 18:07:51.969065   53861 kubeadm.go:934] updating node { 192.168.50.134 8443 v1.20.0 crio true true} ...
	I0917 18:07:51.969191   53861 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-644038 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:07:51.969300   53861 ssh_runner.go:195] Run: crio config
	I0917 18:07:52.017759   53861 cni.go:84] Creating CNI manager for ""
	I0917 18:07:52.017787   53861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:07:52.017800   53861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:07:52.017826   53861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.134 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-644038 NodeName:kubernetes-upgrade-644038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:07:52.018034   53861 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-644038"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
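This generated kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is what the scp step below ships to /var/tmp/minikube/kubeadm.yaml.new and what kubeadm later consumes. The invocation that uses it appears further down in this log; in short form it is:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=...   # full preflight skip list as shown later in this log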
	I0917 18:07:52.018110   53861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:07:52.029567   53861 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:07:52.029650   53861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:07:52.039895   53861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0917 18:07:52.058535   53861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:07:52.076959   53861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0917 18:07:52.095785   53861 ssh_runner.go:195] Run: grep 192.168.50.134	control-plane.minikube.internal$ /etc/hosts
	I0917 18:07:52.100230   53861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:07:52.114155   53861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:07:52.247491   53861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:07:52.265331   53861 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038 for IP: 192.168.50.134
	I0917 18:07:52.265355   53861 certs.go:194] generating shared ca certs ...
	I0917 18:07:52.265370   53861 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.265542   53861 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:07:52.265586   53861 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:07:52.265596   53861 certs.go:256] generating profile certs ...
	I0917 18:07:52.265646   53861 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.key
	I0917 18:07:52.265659   53861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.crt with IP's: []
	I0917 18:07:52.489328   53861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.crt ...
	I0917 18:07:52.489359   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.crt: {Name:mk7976ee72bc880db589a8e24a804849fefc5fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.489528   53861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.key ...
	I0917 18:07:52.489542   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.key: {Name:mk77831b3f17cd8fb192ed8d49174671e5f76a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.489623   53861 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key.82a12f2a
	I0917 18:07:52.489643   53861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt.82a12f2a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.134]
	I0917 18:07:52.591974   53861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt.82a12f2a ...
	I0917 18:07:52.592009   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt.82a12f2a: {Name:mk59fedc806b86e8e3fdf6393dfc11b661f5f8a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.592174   53861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key.82a12f2a ...
	I0917 18:07:52.592190   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key.82a12f2a: {Name:mk2f9721714cb46d43ae601d8baab38d19df3321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.592402   53861 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt.82a12f2a -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt
	I0917 18:07:52.592546   53861 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key.82a12f2a -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key
	I0917 18:07:52.592663   53861 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key
	I0917 18:07:52.592691   53861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.crt with IP's: []
	I0917 18:07:52.746970   53861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.crt ...
	I0917 18:07:52.747003   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.crt: {Name:mk10b4eae5b74770f7d103d470d31bbc8455488f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.747187   53861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key ...
	I0917 18:07:52.747206   53861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key: {Name:mk38c41df315080d13aa6c78c6cb7f7842e0430c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:07:52.747431   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:07:52.747520   53861 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:07:52.747536   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:07:52.747590   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:07:52.747628   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:07:52.747669   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:07:52.747732   53861 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:07:52.748368   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:07:52.779234   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:07:52.806047   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:07:52.833173   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:07:52.863154   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 18:07:52.889474   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:07:52.917304   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:07:52.945796   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:07:52.974534   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:07:53.006329   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:07:53.036353   53861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:07:53.065891   53861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:07:53.090466   53861 ssh_runner.go:195] Run: openssl version
	I0917 18:07:53.099582   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:07:53.117489   53861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:07:53.124027   53861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:07:53.124095   53861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:07:53.133255   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:07:53.151908   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:07:53.168315   53861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:07:53.177340   53861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:07:53.177429   53861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:07:53.190266   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:07:53.208490   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:07:53.241975   53861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:07:53.250117   53861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:07:53.250195   53861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:07:53.257624   53861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
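The openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: the symlink name is the certificate's subject hash (as printed by openssl x509 -hash) plus a ".0" suffix, which is how the links b5213941.0, 51391683.0 and 3ec20f2e.0 seen above were chosen. A compact equivalent for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"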
	I0917 18:07:53.270800   53861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:07:53.275503   53861 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:07:53.275574   53861 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:07:53.275696   53861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:07:53.275796   53861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:07:53.318168   53861 cri.go:89] found id: ""
	I0917 18:07:53.318261   53861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:07:53.331244   53861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:07:53.344275   53861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:07:53.355682   53861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:07:53.355707   53861 kubeadm.go:157] found existing configuration files:
	
	I0917 18:07:53.355762   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:07:53.366666   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:07:53.366746   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:07:53.379473   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:07:53.391992   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:07:53.392065   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:07:53.404203   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:07:53.418415   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:07:53.418498   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:07:53.434532   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:07:53.446569   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:07:53.446654   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:07:53.462005   53861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:07:53.610102   53861 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:07:53.610179   53861 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:07:53.788894   53861 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:07:53.789054   53861 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:07:53.789183   53861 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:07:54.017250   53861 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:07:54.059558   53861 out.go:235]   - Generating certificates and keys ...
	I0917 18:07:54.059716   53861 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:07:54.059832   53861 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:07:54.254684   53861 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:07:54.411053   53861 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:07:54.657135   53861 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:07:54.738078   53861 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:07:54.920234   53861 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:07:54.920488   53861 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	I0917 18:07:54.973769   53861 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:07:54.973971   53861 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	I0917 18:07:55.237914   53861 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:07:55.373349   53861 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:07:55.632047   53861 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:07:55.632332   53861 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:07:55.749747   53861 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:07:55.885684   53861 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:07:56.102467   53861 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:07:56.409175   53861 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:07:56.429596   53861 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:07:56.431349   53861 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:07:56.431426   53861 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:07:56.597127   53861 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:07:56.599810   53861 out.go:235]   - Booting up control plane ...
	I0917 18:07:56.599954   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:07:56.616826   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:07:56.618311   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:07:56.619580   53861 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:07:56.626083   53861 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:08:36.620284   53861 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:08:36.620420   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:08:36.620668   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:08:41.621147   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:08:41.621445   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:08:51.620144   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:08:51.620351   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:09:11.619824   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:09:11.620115   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:09:51.622380   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:09:51.622658   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:09:51.622690   53861 kubeadm.go:310] 
	I0917 18:09:51.622747   53861 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:09:51.622799   53861 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:09:51.622809   53861 kubeadm.go:310] 
	I0917 18:09:51.622883   53861 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:09:51.622948   53861 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:09:51.623126   53861 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:09:51.623138   53861 kubeadm.go:310] 
	I0917 18:09:51.623287   53861 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:09:51.623333   53861 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:09:51.623381   53861 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:09:51.623389   53861 kubeadm.go:310] 
	I0917 18:09:51.623591   53861 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:09:51.623729   53861 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:09:51.623757   53861 kubeadm.go:310] 
	I0917 18:09:51.623908   53861 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:09:51.624038   53861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:09:51.624172   53861 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:09:51.624261   53861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:09:51.624275   53861 kubeadm.go:310] 
	I0917 18:09:51.624739   53861 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:09:51.624842   53861 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:09:51.624909   53861 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:09:51.625045   53861 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-644038 localhost] and IPs [192.168.50.134 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:09:51.625094   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:09:54.268163   53861 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.643032228s)
	I0917 18:09:54.268242   53861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:09:54.286973   53861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:09:54.301769   53861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:09:54.301799   53861 kubeadm.go:157] found existing configuration files:
	
	I0917 18:09:54.301842   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:09:54.312838   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:09:54.312910   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:09:54.327193   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:09:54.340593   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:09:54.340665   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:09:54.352552   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:09:54.362978   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:09:54.363050   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:09:54.374143   53861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:09:54.385043   53861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:09:54.385110   53861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:09:54.396713   53861 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:09:54.474689   53861 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:09:54.474838   53861 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:09:54.641268   53861 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:09:54.641466   53861 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:09:54.641619   53861 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:09:54.840755   53861 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:09:54.842947   53861 out.go:235]   - Generating certificates and keys ...
	I0917 18:09:54.843055   53861 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:09:54.843151   53861 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:09:54.843271   53861 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:09:54.843360   53861 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:09:54.843470   53861 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:09:54.843549   53861 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:09:54.843636   53861 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:09:54.843722   53861 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:09:54.843816   53861 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:09:54.843918   53861 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:09:54.843968   53861 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:09:54.844040   53861 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:09:54.943818   53861 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:09:55.210643   53861 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:09:55.347188   53861 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:09:55.646290   53861 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:09:55.664965   53861 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:09:55.665345   53861 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:09:55.665693   53861 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:09:55.823921   53861 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:09:55.825845   53861 out.go:235]   - Booting up control plane ...
	I0917 18:09:55.825963   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:09:55.835013   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:09:55.836024   53861 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:09:55.836769   53861 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:09:55.838953   53861 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:10:35.842232   53861 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:10:35.842700   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:10:35.842912   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:10:40.843371   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:10:40.843676   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:10:50.844416   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:10:50.844723   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:11:10.843277   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:11:10.843514   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:11:50.842914   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:11:50.843165   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:11:50.843381   53861 kubeadm.go:310] 
	I0917 18:11:50.843466   53861 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:11:50.843645   53861 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:11:50.843660   53861 kubeadm.go:310] 
	I0917 18:11:50.843715   53861 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:11:50.843758   53861 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:11:50.843930   53861 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:11:50.843952   53861 kubeadm.go:310] 
	I0917 18:11:50.844087   53861 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:11:50.844120   53861 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:11:50.844151   53861 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:11:50.844157   53861 kubeadm.go:310] 
	I0917 18:11:50.844252   53861 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:11:50.844342   53861 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:11:50.844350   53861 kubeadm.go:310] 
	I0917 18:11:50.844484   53861 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:11:50.844605   53861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:11:50.844705   53861 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:11:50.844804   53861 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:11:50.844816   53861 kubeadm.go:310] 
	I0917 18:11:50.846329   53861 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:11:50.846470   53861 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:11:50.846565   53861 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:11:50.846648   53861 kubeadm.go:394] duration metric: took 3m57.571080891s to StartCluster
	I0917 18:11:50.846724   53861 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:11:50.846789   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:11:50.890069   53861 cri.go:89] found id: ""
	I0917 18:11:50.890099   53861 logs.go:276] 0 containers: []
	W0917 18:11:50.890109   53861 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:11:50.890118   53861 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:11:50.890175   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:11:50.927987   53861 cri.go:89] found id: ""
	I0917 18:11:50.928020   53861 logs.go:276] 0 containers: []
	W0917 18:11:50.928036   53861 logs.go:278] No container was found matching "etcd"
	I0917 18:11:50.928044   53861 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:11:50.928111   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:11:50.966302   53861 cri.go:89] found id: ""
	I0917 18:11:50.966329   53861 logs.go:276] 0 containers: []
	W0917 18:11:50.966340   53861 logs.go:278] No container was found matching "coredns"
	I0917 18:11:50.966348   53861 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:11:50.966412   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:11:51.003714   53861 cri.go:89] found id: ""
	I0917 18:11:51.003744   53861 logs.go:276] 0 containers: []
	W0917 18:11:51.003752   53861 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:11:51.003758   53861 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:11:51.003828   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:11:51.041112   53861 cri.go:89] found id: ""
	I0917 18:11:51.041143   53861 logs.go:276] 0 containers: []
	W0917 18:11:51.041154   53861 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:11:51.041162   53861 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:11:51.041241   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:11:51.079300   53861 cri.go:89] found id: ""
	I0917 18:11:51.079328   53861 logs.go:276] 0 containers: []
	W0917 18:11:51.079338   53861 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:11:51.079346   53861 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:11:51.079410   53861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:11:51.117878   53861 cri.go:89] found id: ""
	I0917 18:11:51.117913   53861 logs.go:276] 0 containers: []
	W0917 18:11:51.117925   53861 logs.go:278] No container was found matching "kindnet"
	I0917 18:11:51.117936   53861 logs.go:123] Gathering logs for kubelet ...
	I0917 18:11:51.117951   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:11:51.170987   53861 logs.go:123] Gathering logs for dmesg ...
	I0917 18:11:51.171033   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:11:51.186774   53861 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:11:51.186810   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:11:51.318931   53861 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:11:51.318968   53861 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:11:51.318983   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:11:51.425725   53861 logs.go:123] Gathering logs for container status ...
	I0917 18:11:51.425762   53861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0917 18:11:51.481413   53861 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:11:51.481479   53861 out.go:270] * 
	* 
	W0917 18:11:51.481531   53861 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:11:51.481548   53861 out.go:270] * 
	* 
	W0917 18:11:51.482529   53861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:11:51.486288   53861 out.go:201] 
	W0917 18:11:51.487422   53861 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:11:51.487464   53861 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:11:51.487489   53861 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:11:51.488877   53861 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
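	(Editor's note: the kubeadm output captured above already names the troubleshooting path — the kubelet on the control-plane node never became healthy, so the checks have to run inside the VM. A minimal sketch of those same commands wrapped in 'minikube ssh', with the profile name and CRI-O socket path taken from the log above; this is an illustrative follow-up, not part of the recorded test run:
	
		# is the kubelet running, and why did it exit?
		minikube ssh -p kubernetes-upgrade-644038 -- sudo systemctl status kubelet
		minikube ssh -p kubernetes-upgrade-644038 -- 'sudo journalctl -xeu kubelet | tail -n 100'
		# did CRI-O start any control-plane containers at all?
		minikube ssh -p kubernetes-upgrade-644038 -- 'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	
	The suggestion printed at the end of the log — retrying with --extra-config=kubelet.cgroup-driver=systemd — targets the most common cause listed above, a kubelet/runtime cgroup-driver mismatch.)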
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-644038
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-644038: (1.392798687s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-644038 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-644038 status --format={{.Host}}: exit status 7 (75.314518ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.97386891s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-644038 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.314471ms)

-- stdout --
	* [kubernetes-upgrade-644038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-644038
	    minikube start -p kubernetes-upgrade-644038 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6440382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-644038 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m49.883824543s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-17 18:15:04.010341671 +0000 UTC m=+4773.084913960
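For reference, the upgrade path this test drives can be replayed with the same commands. This is a sketch of the steps above: in this run the first start already failed with exit status 109, and the later v1.20.0 attempt is expected to be rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).

    # initial start on the oldest supported version (this is the step that failed here)
    out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-644038
    # upgrade the stopped cluster to v1.31.1
    out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --driver=kvm2 --container-runtime=crio
    # attempting to go back down to v1.20.0 is rejected (exit status 106)
    out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
    # restarting at the current version still works
    out/minikube-linux-amd64 start -p kubernetes-upgrade-644038 --memory=2200 --kubernetes-version=v1.31.1 --driver=kvm2 --container-runtime=crio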
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-644038 -n kubernetes-upgrade-644038
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-644038 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-644038 logs -n 25: (2.134038313s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-267093                | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	| start   | -p NoKubernetes-267093                | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:11 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-246701                       | pause-246701              | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:11 UTC |
	| start   | -p force-systemd-env-085164           | force-systemd-env-085164  | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-722424 ssh cat     | force-systemd-flag-722424 | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:11 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-722424          | force-systemd-flag-722424 | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:11 UTC |
	| start   | -p cert-options-111998                | cert-options-111998       | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-267093 sudo           | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-267093                | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:11 UTC |
	| start   | -p NoKubernetes-267093                | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:12 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-644038          | kubernetes-upgrade-644038 | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:11 UTC |
	| start   | -p kubernetes-upgrade-644038          | kubernetes-upgrade-644038 | jenkins | v1.34.0 | 17 Sep 24 18:11 UTC | 17 Sep 24 18:13 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-085164           | force-systemd-env-085164  | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:12 UTC |
	| start   | -p cert-expiration-297256             | cert-expiration-297256    | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:13 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-111998 ssh               | cert-options-111998       | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:12 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-111998 -- sudo        | cert-options-111998       | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:12 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-111998                | cert-options-111998       | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:12 UTC |
	| start   | -p auto-639892 --memory=3072          | auto-639892               | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:14 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-267093 sudo           | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-267093                | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:12 UTC |
	| start   | -p kindnet-639892                     | kindnet-639892            | jenkins | v1.34.0 | 17 Sep 24 18:12 UTC | 17 Sep 24 18:14 UTC |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-644038          | kubernetes-upgrade-644038 | jenkins | v1.34.0 | 17 Sep 24 18:13 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-644038          | kubernetes-upgrade-644038 | jenkins | v1.34.0 | 17 Sep 24 18:13 UTC | 17 Sep 24 18:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-639892 pgrep -a               | auto-639892               | jenkins | v1.34.0 | 17 Sep 24 18:14 UTC | 17 Sep 24 18:14 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-639892 pgrep -a            | kindnet-639892            | jenkins | v1.34.0 | 17 Sep 24 18:15 UTC | 17 Sep 24 18:15 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:13:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:13:14.169176   61797 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:13:14.169343   61797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:13:14.169354   61797 out.go:358] Setting ErrFile to fd 2...
	I0917 18:13:14.169359   61797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:13:14.169606   61797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:13:14.170344   61797 out.go:352] Setting JSON to false
	I0917 18:13:14.171569   61797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6909,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:13:14.171695   61797 start.go:139] virtualization: kvm guest
	I0917 18:13:14.173774   61797 out.go:177] * [kubernetes-upgrade-644038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:13:14.175044   61797 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:13:14.175069   61797 notify.go:220] Checking for updates...
	I0917 18:13:14.177253   61797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:13:14.178554   61797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:13:14.179846   61797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:13:14.180968   61797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:13:14.182148   61797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:13:14.183636   61797 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:13:14.184105   61797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:14.184180   61797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:14.199676   61797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0917 18:13:14.200056   61797 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:14.200581   61797 main.go:141] libmachine: Using API Version  1
	I0917 18:13:14.200603   61797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:14.200941   61797 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:14.201162   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:13:14.201423   61797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:13:14.201752   61797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:14.201811   61797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:14.216742   61797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0917 18:13:14.217122   61797 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:14.217697   61797 main.go:141] libmachine: Using API Version  1
	I0917 18:13:14.217724   61797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:14.218106   61797 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:14.218388   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:13:14.258925   61797 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:13:14.260214   61797 start.go:297] selected driver: kvm2
	I0917 18:13:14.260230   61797 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:13:14.260349   61797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:13:14.261034   61797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:13:14.261113   61797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:13:14.276986   61797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:13:14.277411   61797 cni.go:84] Creating CNI manager for ""
	I0917 18:13:14.277464   61797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:13:14.277499   61797 start.go:340] cluster config:
	{Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-644038 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:13:14.277611   61797 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:13:14.279761   61797 out.go:177] * Starting "kubernetes-upgrade-644038" primary control-plane node in "kubernetes-upgrade-644038" cluster
	I0917 18:13:16.954740   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:16.955230   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | unable to find current IP address of domain cert-expiration-297256 in network mk-cert-expiration-297256
	I0917 18:13:16.955240   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | I0917 18:13:16.955197   61524 retry.go:31] will retry after 5.592638676s: waiting for machine to come up
	I0917 18:13:14.280821   61797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:13:14.280861   61797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 18:13:14.280870   61797 cache.go:56] Caching tarball of preloaded images
	I0917 18:13:14.280946   61797 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:13:14.280956   61797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 18:13:14.281034   61797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/config.json ...
	I0917 18:13:14.281206   61797 start.go:360] acquireMachinesLock for kubernetes-upgrade-644038: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:13:22.550856   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.551343   60862 main.go:141] libmachine: (cert-expiration-297256) Found IP for machine: 192.168.39.135
	I0917 18:13:22.551360   60862 main.go:141] libmachine: (cert-expiration-297256) Reserving static IP address...
	I0917 18:13:22.551370   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has current primary IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.551783   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | unable to find host DHCP lease matching {name: "cert-expiration-297256", mac: "52:54:00:22:e1:f2", ip: "192.168.39.135"} in network mk-cert-expiration-297256
	I0917 18:13:22.631244   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Getting to WaitForSSH function...
	I0917 18:13:22.631268   60862 main.go:141] libmachine: (cert-expiration-297256) Reserved static IP address: 192.168.39.135
	I0917 18:13:22.631281   60862 main.go:141] libmachine: (cert-expiration-297256) Waiting for SSH to be available...
	I0917 18:13:22.633633   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.633944   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:22.633968   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.634090   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Using SSH client type: external
	I0917 18:13:22.634111   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa (-rw-------)
	I0917 18:13:22.634148   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:13:22.634162   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | About to run SSH command:
	I0917 18:13:22.634175   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | exit 0
	I0917 18:13:22.761640   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | SSH cmd err, output: <nil>: 
	I0917 18:13:22.761942   60862 main.go:141] libmachine: (cert-expiration-297256) KVM machine creation complete!
	I0917 18:13:22.762178   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetConfigRaw
	I0917 18:13:22.762852   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:22.763011   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:22.763160   60862 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:13:22.763207   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetState
	I0917 18:13:22.764449   60862 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:13:22.764456   60862 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:13:22.764459   60862 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:13:22.764464   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:22.766740   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.767120   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:22.767140   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.767298   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:22.767468   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.767606   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.767695   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:22.767808   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:22.767988   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:22.767993   60862 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:13:22.876598   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:13:22.876611   60862 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:13:22.876617   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:22.879486   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.879831   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:22.879856   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.879993   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:22.880219   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.880367   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.880491   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:22.880645   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:22.880861   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:22.880869   60862 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:13:22.990288   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:13:22.990351   60862 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:13:22.990357   60862 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:13:22.990363   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetMachineName
	I0917 18:13:22.990569   60862 buildroot.go:166] provisioning hostname "cert-expiration-297256"
	I0917 18:13:22.990591   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetMachineName
	I0917 18:13:22.990765   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:22.993274   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.993610   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:22.993625   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:22.993777   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:22.993920   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.994047   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:22.994195   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:22.994312   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:22.994476   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:22.994482   60862 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-297256 && echo "cert-expiration-297256" | sudo tee /etc/hostname
	I0917 18:13:23.116416   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-297256
	
	I0917 18:13:23.116435   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.119117   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.119459   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.119476   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.119620   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.119785   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.119957   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.120080   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.120211   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:23.120382   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:23.120393   60862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-297256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-297256/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-297256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:13:23.238566   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:13:23.238586   60862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:13:23.238622   60862 buildroot.go:174] setting up certificates
	I0917 18:13:23.238630   60862 provision.go:84] configureAuth start
	I0917 18:13:23.238638   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetMachineName
	I0917 18:13:23.238942   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetIP
	I0917 18:13:23.241601   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.241969   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.241992   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.242111   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.244279   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.244672   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.244715   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.244828   60862 provision.go:143] copyHostCerts
	I0917 18:13:23.244887   60862 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:13:23.244895   60862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:13:23.244953   60862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:13:23.245033   60862 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:13:23.245036   60862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:13:23.245054   60862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:13:23.245097   60862 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:13:23.245107   60862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:13:23.245122   60862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:13:23.245166   60862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-297256 san=[127.0.0.1 192.168.39.135 cert-expiration-297256 localhost minikube]
	I0917 18:13:23.300753   60862 provision.go:177] copyRemoteCerts
	I0917 18:13:23.300797   60862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:13:23.300819   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.304504   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.304855   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.304876   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.305067   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.305274   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.305401   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.305502   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:23.387880   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:13:23.413799   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:13:23.438840   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:13:23.463033   60862 provision.go:87] duration metric: took 224.392275ms to configureAuth
	I0917 18:13:23.463053   60862 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:13:23.463230   60862 config.go:182] Loaded profile config "cert-expiration-297256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:13:23.463298   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.466164   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.466474   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.466505   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.466695   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.466909   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.467064   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.467172   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.467324   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:23.467483   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:23.467491   60862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:13:23.690178   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:13:23.690197   60862 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:13:23.690207   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetURL
	I0917 18:13:23.691857   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Using libvirt version 6000000
	I0917 18:13:23.693972   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.694272   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.694296   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.694434   60862 main.go:141] libmachine: Docker is up and running!
	I0917 18:13:23.694441   60862 main.go:141] libmachine: Reticulating splines...
	I0917 18:13:23.694447   60862 client.go:171] duration metric: took 25.070274271s to LocalClient.Create
	I0917 18:13:23.694471   60862 start.go:167] duration metric: took 25.070348741s to libmachine.API.Create "cert-expiration-297256"
	I0917 18:13:23.694477   60862 start.go:293] postStartSetup for "cert-expiration-297256" (driver="kvm2")
	I0917 18:13:23.694484   60862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:13:23.694497   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:23.694714   60862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:13:23.694731   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.697017   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.697310   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.697352   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.697510   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.697686   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.697808   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.697941   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:23.946490   61298 start.go:364] duration metric: took 44.452678271s to acquireMachinesLock for "auto-639892"
	I0917 18:13:23.946591   61298 start.go:93] Provisioning new machine with config: &{Name:auto-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:auto-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:13:23.946737   61298 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:13:23.949164   61298 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 18:13:23.949365   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:23.949424   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:23.969903   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0917 18:13:23.970351   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:23.970887   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:13:23.970909   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:23.971252   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:23.971454   61298 main.go:141] libmachine: (auto-639892) Calling .GetMachineName
	I0917 18:13:23.971631   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:23.971804   61298 start.go:159] libmachine.API.Create for "auto-639892" (driver="kvm2")
	I0917 18:13:23.971848   61298 client.go:168] LocalClient.Create starting
	I0917 18:13:23.971891   61298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:13:23.971941   61298 main.go:141] libmachine: Decoding PEM data...
	I0917 18:13:23.971965   61298 main.go:141] libmachine: Parsing certificate...
	I0917 18:13:23.972041   61298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:13:23.972074   61298 main.go:141] libmachine: Decoding PEM data...
	I0917 18:13:23.972094   61298 main.go:141] libmachine: Parsing certificate...
	I0917 18:13:23.972116   61298 main.go:141] libmachine: Running pre-create checks...
	I0917 18:13:23.972133   61298 main.go:141] libmachine: (auto-639892) Calling .PreCreateCheck
	I0917 18:13:23.972502   61298 main.go:141] libmachine: (auto-639892) Calling .GetConfigRaw
	I0917 18:13:23.972934   61298 main.go:141] libmachine: Creating machine...
	I0917 18:13:23.972953   61298 main.go:141] libmachine: (auto-639892) Calling .Create
	I0917 18:13:23.973081   61298 main.go:141] libmachine: (auto-639892) Creating KVM machine...
	I0917 18:13:23.974356   61298 main.go:141] libmachine: (auto-639892) DBG | found existing default KVM network
	I0917 18:13:23.975657   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:23.975491   61898 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:03:57} reservation:<nil>}
	I0917 18:13:23.976404   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:23.976336   61898 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:17:1a} reservation:<nil>}
	I0917 18:13:23.977521   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:23.977429   61898 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260ec0}
	I0917 18:13:23.977542   61298 main.go:141] libmachine: (auto-639892) DBG | created network xml: 
	I0917 18:13:23.977549   61298 main.go:141] libmachine: (auto-639892) DBG | <network>
	I0917 18:13:23.977555   61298 main.go:141] libmachine: (auto-639892) DBG |   <name>mk-auto-639892</name>
	I0917 18:13:23.977562   61298 main.go:141] libmachine: (auto-639892) DBG |   <dns enable='no'/>
	I0917 18:13:23.977571   61298 main.go:141] libmachine: (auto-639892) DBG |   
	I0917 18:13:23.977581   61298 main.go:141] libmachine: (auto-639892) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0917 18:13:23.977605   61298 main.go:141] libmachine: (auto-639892) DBG |     <dhcp>
	I0917 18:13:23.977621   61298 main.go:141] libmachine: (auto-639892) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0917 18:13:23.977631   61298 main.go:141] libmachine: (auto-639892) DBG |     </dhcp>
	I0917 18:13:23.977638   61298 main.go:141] libmachine: (auto-639892) DBG |   </ip>
	I0917 18:13:23.977646   61298 main.go:141] libmachine: (auto-639892) DBG |   
	I0917 18:13:23.977654   61298 main.go:141] libmachine: (auto-639892) DBG | </network>
	I0917 18:13:23.977678   61298 main.go:141] libmachine: (auto-639892) DBG | 
	I0917 18:13:23.983034   61298 main.go:141] libmachine: (auto-639892) DBG | trying to create private KVM network mk-auto-639892 192.168.61.0/24...
	I0917 18:13:24.056297   61298 main.go:141] libmachine: (auto-639892) DBG | private KVM network mk-auto-639892 192.168.61.0/24 created
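
Editor's note: the subnet probe above skips 192.168.39.0/24 and 192.168.50.0/24 (already claimed by virbr1 and virbr2) before settling on 192.168.61.0/24. Below is a minimal sketch of that kind of free-subnet probe using only the Go standard library; the candidate list and the +11 step are inferred from the spacing seen in this log, not taken from minikube's actual network.go.

package main

import (
	"fmt"
	"net"
)

// takenSubnets collects the /24 networks already claimed by local
// interfaces (virbr1 on 192.168.39.1, virbr2 on 192.168.50.1, ...).
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	addrs, _ := net.InterfaceAddrs() // error ignored in this sketch
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		ip4 := ipnet.IP.To4()
		if ip4 == nil {
			continue
		}
		_, n, _ := net.ParseCIDR(fmt.Sprintf("%s/24", ip4.Mask(net.CIDRMask(24, 32))))
		taken[n.String()] = true
	}
	return taken
}

func main() {
	taken := takenSubnets()
	for third := 39; third <= 254; third += 11 { // 39, 50, 61, ... as in the log above
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free private subnet found")
}
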
	I0917 18:13:24.056339   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:24.056241   61898 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:13:24.056351   61298 main.go:141] libmachine: (auto-639892) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892 ...
	I0917 18:13:24.056383   61298 main.go:141] libmachine: (auto-639892) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:13:24.056410   61298 main.go:141] libmachine: (auto-639892) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:13:24.307508   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:24.307344   61898 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa...
	I0917 18:13:23.780208   60862 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:13:23.784885   60862 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:13:23.784900   60862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:13:23.784971   60862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:13:23.785056   60862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:13:23.785168   60862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:13:23.795287   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:13:23.820457   60862 start.go:296] duration metric: took 125.967819ms for postStartSetup
	I0917 18:13:23.820495   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetConfigRaw
	I0917 18:13:23.821116   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetIP
	I0917 18:13:23.823760   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.824055   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.824065   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.824293   60862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/config.json ...
	I0917 18:13:23.824559   60862 start.go:128] duration metric: took 25.225060667s to createHost
	I0917 18:13:23.824580   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.827169   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.827520   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.827542   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.827725   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.827924   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.828096   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.828234   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.828420   60862 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:23.828619   60862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0917 18:13:23.828629   60862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:13:23.946349   60862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596803.914303403
	
	I0917 18:13:23.946373   60862 fix.go:216] guest clock: 1726596803.914303403
	I0917 18:13:23.946381   60862 fix.go:229] Guest: 2024-09-17 18:13:23.914303403 +0000 UTC Remote: 2024-09-17 18:13:23.824568168 +0000 UTC m=+70.123981544 (delta=89.735235ms)
	I0917 18:13:23.946404   60862 fix.go:200] guest clock delta is within tolerance: 89.735235ms
	I0917 18:13:23.946408   60862 start.go:83] releasing machines lock for "cert-expiration-297256", held for 25.347078227s
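
Editor's note: the clock check just above runs `date +%s.%N` on the guest over SSH and compares it with the host clock recorded at the same moment (fix.go reports a delta of ~89.7ms here). A rough sketch of that comparison follows; the two timestamps are copied from the log, and the 2s tolerance is an assumption for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N`
// (e.g. "1726596803.914303403") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726596803.914303403") // guest clock from the log
	if err != nil {
		panic(err)
	}
	remote := time.Unix(0, 1726596803824568168) // host clock at the same instant, also from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints ~89.735235ms
	} else {
		fmt.Printf("guest clock skewed by %v; would resync\n", delta)
	}
}
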
	I0917 18:13:23.946441   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:23.946765   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetIP
	I0917 18:13:23.949483   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.949906   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.949930   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.950102   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:23.950660   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:23.950834   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:23.950921   60862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:13:23.950961   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.951069   60862 ssh_runner.go:195] Run: cat /version.json
	I0917 18:13:23.951080   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:23.953613   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.953916   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.953936   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.953956   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.954084   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.954227   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.954330   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.954358   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:23.954371   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:23.954480   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:23.954541   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:23.954683   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:23.954831   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:23.954973   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:24.042189   60862 ssh_runner.go:195] Run: systemctl --version
	I0917 18:13:24.069459   60862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:13:24.256226   60862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:13:24.262718   60862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:13:24.262773   60862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:13:24.279836   60862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:13:24.279852   60862 start.go:495] detecting cgroup driver to use...
	I0917 18:13:24.279923   60862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:13:24.297489   60862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:13:24.312333   60862 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:13:24.312398   60862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:13:24.326933   60862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:13:24.342515   60862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:13:24.460603   60862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:13:24.645291   60862 docker.go:233] disabling docker service ...
	I0917 18:13:24.645368   60862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:13:24.664025   60862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:13:24.678158   60862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:13:24.813353   60862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:13:24.934916   60862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:13:24.951618   60862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:13:24.971444   60862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:13:24.971503   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:24.982513   60862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:13:24.982571   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:24.995002   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:25.007447   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:25.019393   60862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:13:25.032127   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:25.043946   60862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:25.063023   60862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:25.076018   60862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:13:25.089064   60862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:13:25.089124   60862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:13:25.103971   60862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:13:25.118465   60862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:13:25.243197   60862 ssh_runner.go:195] Run: sudo systemctl restart crio
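
Editor's note: the block above reshapes /etc/crio/crio.conf.d/02-crio.conf with a chain of `sudo sed -i` calls (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts cri-o. The sketch below shows just the pause_image/cgroup_manager rewrites done directly in Go instead of via sed; the file path and key names come from the log, the rest is illustrative, and error handling is reduced to panics.

package main

import (
	"os"
	"regexp"
)

// setKey replaces any `key = ...` line with `key = "value"`, mirroring the
// `sudo sed -i 's|^.*key = .*$|key = "value"|'` invocations in the log.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart crio` (as in the log) would follow for the edit to take effect.
}
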
	I0917 18:13:25.348928   60862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:13:25.348989   60862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:13:25.354838   60862 start.go:563] Will wait 60s for crictl version
	I0917 18:13:25.354890   60862 ssh_runner.go:195] Run: which crictl
	I0917 18:13:25.358852   60862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:13:25.402691   60862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:13:25.402776   60862 ssh_runner.go:195] Run: crio --version
	I0917 18:13:25.434514   60862 ssh_runner.go:195] Run: crio --version
	I0917 18:13:25.469047   60862 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:13:25.470570   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetIP
	I0917 18:13:25.473504   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:25.473872   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:25.473894   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:25.474074   60862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:13:25.478823   60862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:13:25.492233   60862 kubeadm.go:883] updating cluster {Name:cert-expiration-297256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:cert-expiration-297256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:13:25.492336   60862 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:13:25.492380   60862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:13:25.540963   60862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:13:25.541017   60862 ssh_runner.go:195] Run: which lz4
	I0917 18:13:25.545729   60862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:13:25.550478   60862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:13:25.550514   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:13:27.054783   60862 crio.go:462] duration metric: took 1.50908848s to copy over tarball
	I0917 18:13:27.054869   60862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:13:24.423980   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:24.423854   61898 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/auto-639892.rawdisk...
	I0917 18:13:24.424010   61298 main.go:141] libmachine: (auto-639892) DBG | Writing magic tar header
	I0917 18:13:24.424024   61298 main.go:141] libmachine: (auto-639892) DBG | Writing SSH key tar header
	I0917 18:13:24.424033   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:24.423966   61898 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892 ...
	I0917 18:13:24.424093   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892
	I0917 18:13:24.424162   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892 (perms=drwx------)
	I0917 18:13:24.424187   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:13:24.424202   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:13:24.424217   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:13:24.424232   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:13:24.424250   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:13:24.424262   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:13:24.424275   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:13:24.424299   61298 main.go:141] libmachine: (auto-639892) DBG | Checking permissions on dir: /home
	I0917 18:13:24.424316   61298 main.go:141] libmachine: (auto-639892) DBG | Skipping /home - not owner
	I0917 18:13:24.424329   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:13:24.424347   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:13:24.424357   61298 main.go:141] libmachine: (auto-639892) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:13:24.424369   61298 main.go:141] libmachine: (auto-639892) Creating domain...
	I0917 18:13:24.426461   61298 main.go:141] libmachine: (auto-639892) define libvirt domain using xml: 
	I0917 18:13:24.426492   61298 main.go:141] libmachine: (auto-639892) <domain type='kvm'>
	I0917 18:13:24.426503   61298 main.go:141] libmachine: (auto-639892)   <name>auto-639892</name>
	I0917 18:13:24.426514   61298 main.go:141] libmachine: (auto-639892)   <memory unit='MiB'>3072</memory>
	I0917 18:13:24.426523   61298 main.go:141] libmachine: (auto-639892)   <vcpu>2</vcpu>
	I0917 18:13:24.426529   61298 main.go:141] libmachine: (auto-639892)   <features>
	I0917 18:13:24.426536   61298 main.go:141] libmachine: (auto-639892)     <acpi/>
	I0917 18:13:24.426547   61298 main.go:141] libmachine: (auto-639892)     <apic/>
	I0917 18:13:24.426555   61298 main.go:141] libmachine: (auto-639892)     <pae/>
	I0917 18:13:24.426559   61298 main.go:141] libmachine: (auto-639892)     
	I0917 18:13:24.426564   61298 main.go:141] libmachine: (auto-639892)   </features>
	I0917 18:13:24.426568   61298 main.go:141] libmachine: (auto-639892)   <cpu mode='host-passthrough'>
	I0917 18:13:24.426586   61298 main.go:141] libmachine: (auto-639892)   
	I0917 18:13:24.426602   61298 main.go:141] libmachine: (auto-639892)   </cpu>
	I0917 18:13:24.426609   61298 main.go:141] libmachine: (auto-639892)   <os>
	I0917 18:13:24.426614   61298 main.go:141] libmachine: (auto-639892)     <type>hvm</type>
	I0917 18:13:24.426631   61298 main.go:141] libmachine: (auto-639892)     <boot dev='cdrom'/>
	I0917 18:13:24.426640   61298 main.go:141] libmachine: (auto-639892)     <boot dev='hd'/>
	I0917 18:13:24.426648   61298 main.go:141] libmachine: (auto-639892)     <bootmenu enable='no'/>
	I0917 18:13:24.426656   61298 main.go:141] libmachine: (auto-639892)   </os>
	I0917 18:13:24.426663   61298 main.go:141] libmachine: (auto-639892)   <devices>
	I0917 18:13:24.426671   61298 main.go:141] libmachine: (auto-639892)     <disk type='file' device='cdrom'>
	I0917 18:13:24.426683   61298 main.go:141] libmachine: (auto-639892)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/boot2docker.iso'/>
	I0917 18:13:24.426701   61298 main.go:141] libmachine: (auto-639892)       <target dev='hdc' bus='scsi'/>
	I0917 18:13:24.426711   61298 main.go:141] libmachine: (auto-639892)       <readonly/>
	I0917 18:13:24.426719   61298 main.go:141] libmachine: (auto-639892)     </disk>
	I0917 18:13:24.426728   61298 main.go:141] libmachine: (auto-639892)     <disk type='file' device='disk'>
	I0917 18:13:24.426738   61298 main.go:141] libmachine: (auto-639892)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:13:24.426751   61298 main.go:141] libmachine: (auto-639892)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/auto-639892.rawdisk'/>
	I0917 18:13:24.426762   61298 main.go:141] libmachine: (auto-639892)       <target dev='hda' bus='virtio'/>
	I0917 18:13:24.426776   61298 main.go:141] libmachine: (auto-639892)     </disk>
	I0917 18:13:24.426787   61298 main.go:141] libmachine: (auto-639892)     <interface type='network'>
	I0917 18:13:24.426802   61298 main.go:141] libmachine: (auto-639892)       <source network='mk-auto-639892'/>
	I0917 18:13:24.426812   61298 main.go:141] libmachine: (auto-639892)       <model type='virtio'/>
	I0917 18:13:24.426822   61298 main.go:141] libmachine: (auto-639892)     </interface>
	I0917 18:13:24.426830   61298 main.go:141] libmachine: (auto-639892)     <interface type='network'>
	I0917 18:13:24.426840   61298 main.go:141] libmachine: (auto-639892)       <source network='default'/>
	I0917 18:13:24.426847   61298 main.go:141] libmachine: (auto-639892)       <model type='virtio'/>
	I0917 18:13:24.426857   61298 main.go:141] libmachine: (auto-639892)     </interface>
	I0917 18:13:24.426865   61298 main.go:141] libmachine: (auto-639892)     <serial type='pty'>
	I0917 18:13:24.426871   61298 main.go:141] libmachine: (auto-639892)       <target port='0'/>
	I0917 18:13:24.426877   61298 main.go:141] libmachine: (auto-639892)     </serial>
	I0917 18:13:24.426882   61298 main.go:141] libmachine: (auto-639892)     <console type='pty'>
	I0917 18:13:24.426890   61298 main.go:141] libmachine: (auto-639892)       <target type='serial' port='0'/>
	I0917 18:13:24.426896   61298 main.go:141] libmachine: (auto-639892)     </console>
	I0917 18:13:24.426904   61298 main.go:141] libmachine: (auto-639892)     <rng model='virtio'>
	I0917 18:13:24.426914   61298 main.go:141] libmachine: (auto-639892)       <backend model='random'>/dev/random</backend>
	I0917 18:13:24.426924   61298 main.go:141] libmachine: (auto-639892)     </rng>
	I0917 18:13:24.426932   61298 main.go:141] libmachine: (auto-639892)     
	I0917 18:13:24.426965   61298 main.go:141] libmachine: (auto-639892)     
	I0917 18:13:24.426987   61298 main.go:141] libmachine: (auto-639892)   </devices>
	I0917 18:13:24.426998   61298 main.go:141] libmachine: (auto-639892) </domain>
	I0917 18:13:24.427007   61298 main.go:141] libmachine: (auto-639892) 
	I0917 18:13:24.431036   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:1b:0b:70 in network default
	I0917 18:13:24.431672   61298 main.go:141] libmachine: (auto-639892) Ensuring networks are active...
	I0917 18:13:24.431692   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:24.432360   61298 main.go:141] libmachine: (auto-639892) Ensuring network default is active
	I0917 18:13:24.432699   61298 main.go:141] libmachine: (auto-639892) Ensuring network mk-auto-639892 is active
	I0917 18:13:24.433289   61298 main.go:141] libmachine: (auto-639892) Getting domain xml...
	I0917 18:13:24.434084   61298 main.go:141] libmachine: (auto-639892) Creating domain...
	I0917 18:13:25.792050   61298 main.go:141] libmachine: (auto-639892) Waiting to get IP...
	I0917 18:13:25.793033   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:25.794621   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:25.794676   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:25.794602   61898 retry.go:31] will retry after 197.18207ms: waiting for machine to come up
	I0917 18:13:25.993321   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:25.993807   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:25.993839   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:25.993759   61898 retry.go:31] will retry after 286.490487ms: waiting for machine to come up
	I0917 18:13:26.282486   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:26.283083   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:26.283114   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:26.283057   61898 retry.go:31] will retry after 295.290965ms: waiting for machine to come up
	I0917 18:13:26.580030   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:26.580561   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:26.580596   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:26.580493   61898 retry.go:31] will retry after 541.452469ms: waiting for machine to come up
	I0917 18:13:27.123271   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:27.123805   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:27.123838   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:27.123757   61898 retry.go:31] will retry after 503.675366ms: waiting for machine to come up
	I0917 18:13:27.629561   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:27.630075   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:27.630105   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:27.630025   61898 retry.go:31] will retry after 792.212389ms: waiting for machine to come up
	I0917 18:13:28.423925   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:28.424423   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:28.424470   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:28.424377   61898 retry.go:31] will retry after 1.10707713s: waiting for machine to come up
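
Editor's note: the `retry.go:31` lines above poll for the new domain's DHCP lease with a growing, jittered delay until the machine reports an IP. A rough sketch of that wait loop is below; the hypothetical lookupIP helper shells out to `virsh net-dhcp-leases`, whereas the real kvm2 driver talks to libvirt directly, and the backoff constants and timeout are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"regexp"
	"time"
)

// lookupIP greps the DHCP leases of the given libvirt network for the
// domain's MAC address. Hypothetical helper for this sketch only.
func lookupIP(network, mac string) (string, error) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	re := regexp.MustCompile(mac + `.*?(\d+\.\d+\.\d+\.\d+)`)
	if m := re.FindSubmatch(out); m != nil {
		return string(m[1]), nil
	}
	return "", errors.New("no lease yet")
}

func main() {
	const network, mac = "mk-auto-639892", "52:54:00:c6:ab:b8" // values from the log above
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(network, mac); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
	fmt.Println("timed out waiting for an IP")
}
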
	I0917 18:13:29.212628   60862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157738767s)
	I0917 18:13:29.212645   60862 crio.go:469] duration metric: took 2.157844539s to extract the tarball
	I0917 18:13:29.212651   60862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:13:29.250967   60862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:13:29.299889   60862 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:13:29.299901   60862 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:13:29.299907   60862 kubeadm.go:934] updating node { 192.168.39.135 8443 v1.31.1 crio true true} ...
	I0917 18:13:29.300010   60862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-297256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:cert-expiration-297256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:13:29.300062   60862 ssh_runner.go:195] Run: crio config
	I0917 18:13:29.353803   60862 cni.go:84] Creating CNI manager for ""
	I0917 18:13:29.353817   60862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:13:29.353828   60862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:13:29.353854   60862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-297256 NodeName:cert-expiration-297256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:13:29.354015   60862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-297256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:13:29.354081   60862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:13:29.365279   60862 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:13:29.365349   60862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:13:29.375798   60862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0917 18:13:29.393391   60862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:13:29.411173   60862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0917 18:13:29.430011   60862 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I0917 18:13:29.434437   60862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
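
Editor's note: the /etc/hosts update above is a filter-and-append: drop any stale line ending in the control-plane alias, append the fresh mapping, and copy the result back. A small sketch of the same idea is below, assuming it runs as root on the guest; minikube does it through the shell one-liner shown in the log rather than through Go file I/O.

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites path so that exactly one line maps ip to host,
// mirroring the `{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts` pattern.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entries for this alias
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // swap in the rewritten file
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.135", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
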
	I0917 18:13:29.448075   60862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:13:29.568011   60862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:13:29.586964   60862 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256 for IP: 192.168.39.135
	I0917 18:13:29.586977   60862 certs.go:194] generating shared ca certs ...
	I0917 18:13:29.586992   60862 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:29.587158   60862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:13:29.587191   60862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:13:29.587196   60862 certs.go:256] generating profile certs ...
	I0917 18:13:29.587243   60862 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.key
	I0917 18:13:29.587262   60862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.crt with IP's: []
	I0917 18:13:29.874376   60862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.crt ...
	I0917 18:13:29.874390   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.crt: {Name:mkd3b069bc58c44cb0716b3693a1b7f0477f6573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:29.874592   60862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.key ...
	I0917 18:13:29.874602   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/client.key: {Name:mk3331b089be793099af6d997c7818c254d274e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:29.874721   60862 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key.8e5ca3f4
	I0917 18:13:29.874733   60862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt.8e5ca3f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.135]
	I0917 18:13:29.951197   60862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt.8e5ca3f4 ...
	I0917 18:13:29.951211   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt.8e5ca3f4: {Name:mk11bf59d793c557d0f89ff42ff8797684d135ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:29.951396   60862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key.8e5ca3f4 ...
	I0917 18:13:29.951407   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key.8e5ca3f4: {Name:mkc122822a67dc438370769d1c3f48068a48b643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:29.951510   60862 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt.8e5ca3f4 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt
	I0917 18:13:29.951595   60862 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key.8e5ca3f4 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key
	I0917 18:13:29.951654   60862 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.key
	I0917 18:13:29.951664   60862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.crt with IP's: []
	I0917 18:13:30.062092   60862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.crt ...
	I0917 18:13:30.062108   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.crt: {Name:mke053bb7f39ca1296922aa71054a418f119797b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:30.062302   60862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.key ...
	I0917 18:13:30.062312   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.key: {Name:mkfbb4274f726fe6b3a9e78d7e0dcab1a1ae9451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
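
Editor's note: the certs.go/crypto.go steps above generate a CA-signed client certificate, an apiserver certificate with the listed SANs, and a proxy-client certificate, writing each pair under the profile directory. Below is a compressed sketch of that signing step with crypto/x509; the CA is created in-memory here instead of being loaded from .minikube/ca.{crt,key}, the SANs and the 3-minute lifetime are taken from the log (CertExpiration:3m0s in the profile config), and errors are elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real run reuses the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Profile certificate with the apiserver SANs seen in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute), // CertExpiration:3m0s from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.135"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)

	// Write the PEM-encoded certificate; the matching key would be written alongside it.
	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
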
	I0917 18:13:30.062521   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:13:30.062556   60862 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:13:30.062571   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:13:30.062592   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:13:30.062635   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:13:30.062671   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:13:30.062708   60862 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:13:30.063275   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:13:30.093664   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:13:30.120209   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:13:30.150084   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:13:30.178744   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:13:30.208776   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:13:30.250463   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:13:30.276920   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/cert-expiration-297256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:13:30.302828   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:13:30.331466   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:13:30.360375   60862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:13:30.397180   60862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:13:30.418731   60862 ssh_runner.go:195] Run: openssl version
	I0917 18:13:30.433009   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:13:30.450671   60862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:30.457211   60862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:30.457292   60862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:30.463914   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:13:30.476233   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:13:30.493011   60862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:13:30.499216   60862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:13:30.499269   60862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:13:30.507574   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:13:30.522622   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:13:30.537263   60862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:13:30.542717   60862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:13:30.542782   60862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:13:30.549786   60862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:13:30.562607   60862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:13:30.567441   60862 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:13:30.567491   60862 kubeadm.go:392] StartCluster: {Name:cert-expiration-297256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.31.1 ClusterName:cert-expiration-297256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:13:30.567552   60862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:13:30.567602   60862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:13:30.620763   60862 cri.go:89] found id: ""
	I0917 18:13:30.620818   60862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:13:30.635744   60862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:13:30.650046   60862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:13:30.661213   60862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:13:30.661224   60862 kubeadm.go:157] found existing configuration files:
	
	I0917 18:13:30.661299   60862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:13:30.671548   60862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:13:30.671599   60862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:13:30.683920   60862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:13:30.697396   60862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:13:30.697442   60862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:13:30.712239   60862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:13:30.723159   60862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:13:30.723210   60862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:13:30.735765   60862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:13:30.748486   60862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:13:30.748562   60862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:13:30.761165   60862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:13:30.885442   60862 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:13:30.885514   60862 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:13:30.993392   60862 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:13:30.993548   60862 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:13:30.993653   60862 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:13:31.006161   60862 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:13:31.058409   60862 out.go:235]   - Generating certificates and keys ...
	I0917 18:13:31.058653   60862 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:13:31.058738   60862 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:13:31.109400   60862 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:13:31.428001   60862 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:13:31.491498   60862 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:13:31.685044   60862 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:13:31.783563   60862 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:13:31.783894   60862 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-297256 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0917 18:13:31.874507   60862 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:13:31.874939   60862 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-297256 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0917 18:13:31.948165   60862 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:13:32.066398   60862 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:13:32.403547   60862 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:13:32.403909   60862 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:13:32.535323   60862 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:13:32.652001   60862 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:13:32.809881   60862 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:13:32.988500   60862 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:13:33.327589   60862 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:13:33.328059   60862 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:13:33.331562   60862 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:13:33.333695   60862 out.go:235]   - Booting up control plane ...
	I0917 18:13:33.333817   60862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:13:33.333906   60862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:13:33.333976   60862 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:13:33.353222   60862 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:13:33.359688   60862 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:13:33.359771   60862 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:13:33.501467   60862 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:13:33.501647   60862 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:13:29.533142   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:29.533536   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:29.533591   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:29.533526   61898 retry.go:31] will retry after 1.076414897s: waiting for machine to come up
	I0917 18:13:30.611911   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:30.612451   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:30.612481   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:30.612393   61898 retry.go:31] will retry after 1.784339689s: waiting for machine to come up
	I0917 18:13:32.398544   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:32.399076   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:32.399109   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:32.399022   61898 retry.go:31] will retry after 1.831418661s: waiting for machine to come up
	I0917 18:13:34.231496   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:34.231942   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:34.231969   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:34.231896   61898 retry.go:31] will retry after 2.180206592s: waiting for machine to come up
	I0917 18:13:34.002414   60862 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.413591ms
	I0917 18:13:34.002934   60862 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:13:36.414355   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:36.414896   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:36.414929   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:36.414833   61898 retry.go:31] will retry after 2.617601374s: waiting for machine to come up
	I0917 18:13:39.033842   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:39.034289   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:39.034319   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:39.034208   61898 retry.go:31] will retry after 3.019820937s: waiting for machine to come up
	I0917 18:13:39.504183   60862 kubeadm.go:310] [api-check] The API server is healthy after 5.502458971s
	I0917 18:13:39.519420   60862 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:13:39.541995   60862 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:13:39.588836   60862 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:13:39.589052   60862 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-297256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:13:39.606512   60862 kubeadm.go:310] [bootstrap-token] Using token: l7q2pk.32qfungjj5zujolp
	I0917 18:13:39.608360   60862 out.go:235]   - Configuring RBAC rules ...
	I0917 18:13:39.608514   60862 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:13:39.614512   60862 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:13:39.630470   60862 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:13:39.638333   60862 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:13:39.645878   60862 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:13:39.650867   60862 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:13:39.909154   60862 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:13:40.344036   60862 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:13:40.909335   60862 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:13:40.909347   60862 kubeadm.go:310] 
	I0917 18:13:40.909396   60862 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:13:40.909400   60862 kubeadm.go:310] 
	I0917 18:13:40.909522   60862 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:13:40.909532   60862 kubeadm.go:310] 
	I0917 18:13:40.909565   60862 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:13:40.909645   60862 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:13:40.909727   60862 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:13:40.909734   60862 kubeadm.go:310] 
	I0917 18:13:40.909808   60862 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:13:40.909813   60862 kubeadm.go:310] 
	I0917 18:13:40.909879   60862 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:13:40.909885   60862 kubeadm.go:310] 
	I0917 18:13:40.909941   60862 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:13:40.910037   60862 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:13:40.910128   60862 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:13:40.910135   60862 kubeadm.go:310] 
	I0917 18:13:40.910241   60862 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:13:40.910346   60862 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:13:40.910354   60862 kubeadm.go:310] 
	I0917 18:13:40.910463   60862 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l7q2pk.32qfungjj5zujolp \
	I0917 18:13:40.910558   60862 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:13:40.910574   60862 kubeadm.go:310] 	--control-plane 
	I0917 18:13:40.910577   60862 kubeadm.go:310] 
	I0917 18:13:40.910666   60862 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:13:40.910680   60862 kubeadm.go:310] 
	I0917 18:13:40.910778   60862 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l7q2pk.32qfungjj5zujolp \
	I0917 18:13:40.910913   60862 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:13:40.911603   60862 kubeadm.go:310] W0917 18:13:30.855203     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:13:40.912015   60862 kubeadm.go:310] W0917 18:13:30.856145     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:13:40.912155   60862 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:13:40.912168   60862 cni.go:84] Creating CNI manager for ""
	I0917 18:13:40.912174   60862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:13:40.914165   60862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:13:40.915492   60862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:13:40.928780   60862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:13:40.955884   60862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:13:40.955994   60862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-297256 minikube.k8s.io/updated_at=2024_09_17T18_13_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=cert-expiration-297256 minikube.k8s.io/primary=true
	I0917 18:13:40.956010   60862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:13:40.987600   60862 ops.go:34] apiserver oom_adj: -16
	I0917 18:13:41.218736   60862 kubeadm.go:1113] duration metric: took 262.795916ms to wait for elevateKubeSystemPrivileges
	I0917 18:13:41.218760   60862 kubeadm.go:394] duration metric: took 10.651273602s to StartCluster
	I0917 18:13:41.218778   60862 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:41.218858   60862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:13:41.220219   60862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:41.220467   60862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 18:13:41.220478   60862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:13:41.220587   60862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:13:41.220712   60862 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-297256"
	I0917 18:13:41.220713   60862 config.go:182] Loaded profile config "cert-expiration-297256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:13:41.220729   60862 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-297256"
	I0917 18:13:41.220766   60862 host.go:66] Checking if "cert-expiration-297256" exists ...
	I0917 18:13:41.220761   60862 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-297256"
	I0917 18:13:41.220805   60862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-297256"
	I0917 18:13:41.221357   60862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:41.221405   60862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:41.221413   60862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:41.221460   60862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:41.222223   60862 out.go:177] * Verifying Kubernetes components...
	I0917 18:13:41.223409   60862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:13:41.237877   60862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0917 18:13:41.237900   60862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0917 18:13:41.238442   60862 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:41.238499   60862 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:41.239018   60862 main.go:141] libmachine: Using API Version  1
	I0917 18:13:41.239023   60862 main.go:141] libmachine: Using API Version  1
	I0917 18:13:41.239035   60862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:41.239038   60862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:41.239371   60862 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:41.239405   60862 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:41.239583   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetState
	I0917 18:13:41.239930   60862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:41.239987   60862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:41.243089   60862 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-297256"
	I0917 18:13:41.243120   60862 host.go:66] Checking if "cert-expiration-297256" exists ...
	I0917 18:13:41.243484   60862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:41.243524   60862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:41.257120   60862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0917 18:13:41.257662   60862 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:41.258195   60862 main.go:141] libmachine: Using API Version  1
	I0917 18:13:41.258208   60862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:41.258787   60862 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:41.258861   60862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0917 18:13:41.258997   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetState
	I0917 18:13:41.259320   60862 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:41.259966   60862 main.go:141] libmachine: Using API Version  1
	I0917 18:13:41.259977   60862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:41.260437   60862 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:41.261117   60862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:41.261150   60862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:41.261340   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:41.263490   60862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:13:41.264770   60862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:13:41.264779   60862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:13:41.264792   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:41.267835   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:41.268251   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:41.268267   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:41.268406   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:41.268571   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:41.268692   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:41.268793   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:41.280353   60862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0917 18:13:41.280854   60862 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:41.281492   60862 main.go:141] libmachine: Using API Version  1
	I0917 18:13:41.281516   60862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:41.281849   60862 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:41.282043   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetState
	I0917 18:13:41.283801   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .DriverName
	I0917 18:13:41.284014   60862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:13:41.284027   60862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:13:41.284046   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHHostname
	I0917 18:13:41.287026   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:41.287468   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:e1:f2", ip: ""} in network mk-cert-expiration-297256: {Iface:virbr1 ExpiryTime:2024-09-17 19:13:13 +0000 UTC Type:0 Mac:52:54:00:22:e1:f2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:cert-expiration-297256 Clientid:01:52:54:00:22:e1:f2}
	I0917 18:13:41.287479   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | domain cert-expiration-297256 has defined IP address 192.168.39.135 and MAC address 52:54:00:22:e1:f2 in network mk-cert-expiration-297256
	I0917 18:13:41.287669   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHPort
	I0917 18:13:41.287822   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHKeyPath
	I0917 18:13:41.287900   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .GetSSHUsername
	I0917 18:13:41.287965   60862 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/cert-expiration-297256/id_rsa Username:docker}
	I0917 18:13:41.448335   60862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:13:41.448335   60862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 18:13:41.549905   60862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:13:41.589216   60862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:13:41.923070   60862 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0917 18:13:41.924006   60862 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:13:41.924049   60862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:13:42.170366   60862 main.go:141] libmachine: Making call to close driver server
	I0917 18:13:42.170378   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .Close
	I0917 18:13:42.170410   60862 main.go:141] libmachine: Making call to close driver server
	I0917 18:13:42.170422   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .Close
	I0917 18:13:42.170518   60862 api_server.go:72] duration metric: took 949.964569ms to wait for apiserver process to appear ...
	I0917 18:13:42.170531   60862 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:13:42.170550   60862 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0917 18:13:42.170722   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Closing plugin on server side
	I0917 18:13:42.170742   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Closing plugin on server side
	I0917 18:13:42.170770   60862 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:13:42.170776   60862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:13:42.170782   60862 main.go:141] libmachine: Making call to close driver server
	I0917 18:13:42.170787   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .Close
	I0917 18:13:42.170793   60862 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:13:42.170810   60862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:13:42.170817   60862 main.go:141] libmachine: Making call to close driver server
	I0917 18:13:42.170824   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .Close
	I0917 18:13:42.171040   60862 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:13:42.171049   60862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:13:42.171055   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Closing plugin on server side
	I0917 18:13:42.171214   60862 main.go:141] libmachine: (cert-expiration-297256) DBG | Closing plugin on server side
	I0917 18:13:42.171226   60862 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:13:42.171235   60862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:13:42.184538   60862 main.go:141] libmachine: Making call to close driver server
	I0917 18:13:42.184554   60862 main.go:141] libmachine: (cert-expiration-297256) Calling .Close
	I0917 18:13:42.184781   60862 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:13:42.184797   60862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:13:42.186549   60862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0917 18:13:42.187858   60862 addons.go:510] duration metric: took 967.279286ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 18:13:42.189966   60862 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I0917 18:13:42.191174   60862 api_server.go:141] control plane version: v1.31.1
	I0917 18:13:42.191198   60862 api_server.go:131] duration metric: took 20.662658ms to wait for apiserver health ...
	I0917 18:13:42.191205   60862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:13:42.199745   60862 system_pods.go:59] 5 kube-system pods found
	I0917 18:13:42.199763   60862 system_pods.go:61] "etcd-cert-expiration-297256" [e1628f0b-df78-4241-b91e-1a2c2cfa3d87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:13:42.199769   60862 system_pods.go:61] "kube-apiserver-cert-expiration-297256" [3c0b794d-7425-4339-a504-4ea74ead8c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:13:42.199774   60862 system_pods.go:61] "kube-controller-manager-cert-expiration-297256" [c33288ef-794c-4840-98f5-5d233c0b43c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:13:42.199779   60862 system_pods.go:61] "kube-scheduler-cert-expiration-297256" [a7fa8b91-160c-4345-81de-ce41211cd7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:13:42.199783   60862 system_pods.go:61] "storage-provisioner" [63390b36-85be-448a-a414-4d38158db08c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0917 18:13:42.199793   60862 system_pods.go:74] duration metric: took 8.578965ms to wait for pod list to return data ...
	I0917 18:13:42.199801   60862 kubeadm.go:582] duration metric: took 979.251233ms to wait for: map[apiserver:true system_pods:true]
	I0917 18:13:42.199810   60862 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:13:42.203985   60862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:13:42.203997   60862 node_conditions.go:123] node cpu capacity is 2
	I0917 18:13:42.204007   60862 node_conditions.go:105] duration metric: took 4.194105ms to run NodePressure ...
	I0917 18:13:42.204017   60862 start.go:241] waiting for startup goroutines ...
	I0917 18:13:42.427511   60862 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-297256" context rescaled to 1 replicas
	I0917 18:13:42.427545   60862 start.go:246] waiting for cluster config update ...
	I0917 18:13:42.427557   60862 start.go:255] writing updated cluster config ...
	I0917 18:13:42.427808   60862 ssh_runner.go:195] Run: rm -f paused
	I0917 18:13:42.475636   60862 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:13:42.477857   60862 out.go:177] * Done! kubectl is now configured to use "cert-expiration-297256" cluster and "default" namespace by default
	I0917 18:13:42.055364   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:42.055802   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find current IP address of domain auto-639892 in network mk-auto-639892
	I0917 18:13:42.055825   61298 main.go:141] libmachine: (auto-639892) DBG | I0917 18:13:42.055745   61898 retry.go:31] will retry after 4.68126757s: waiting for machine to come up
	I0917 18:13:46.738508   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:46.739042   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has current primary IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:46.739063   61298 main.go:141] libmachine: (auto-639892) Found IP for machine: 192.168.61.95
	I0917 18:13:46.739077   61298 main.go:141] libmachine: (auto-639892) Reserving static IP address...
	I0917 18:13:46.739462   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find host DHCP lease matching {name: "auto-639892", mac: "52:54:00:c6:ab:b8", ip: "192.168.61.95"} in network mk-auto-639892
	I0917 18:13:46.821662   61298 main.go:141] libmachine: (auto-639892) DBG | Getting to WaitForSSH function...
	I0917 18:13:46.821694   61298 main.go:141] libmachine: (auto-639892) Reserved static IP address: 192.168.61.95
	I0917 18:13:46.821714   61298 main.go:141] libmachine: (auto-639892) Waiting for SSH to be available...
	I0917 18:13:46.824762   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:46.825120   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892
	I0917 18:13:46.825144   61298 main.go:141] libmachine: (auto-639892) DBG | unable to find defined IP address of network mk-auto-639892 interface with MAC address 52:54:00:c6:ab:b8
	I0917 18:13:46.825310   61298 main.go:141] libmachine: (auto-639892) DBG | Using SSH client type: external
	I0917 18:13:46.825348   61298 main.go:141] libmachine: (auto-639892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa (-rw-------)
	I0917 18:13:46.825414   61298 main.go:141] libmachine: (auto-639892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:13:46.825444   61298 main.go:141] libmachine: (auto-639892) DBG | About to run SSH command:
	I0917 18:13:46.825463   61298 main.go:141] libmachine: (auto-639892) DBG | exit 0
	I0917 18:13:46.829220   61298 main.go:141] libmachine: (auto-639892) DBG | SSH cmd err, output: exit status 255: 
	I0917 18:13:46.829265   61298 main.go:141] libmachine: (auto-639892) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0917 18:13:46.829276   61298 main.go:141] libmachine: (auto-639892) DBG | command : exit 0
	I0917 18:13:46.829283   61298 main.go:141] libmachine: (auto-639892) DBG | err     : exit status 255
	I0917 18:13:46.829294   61298 main.go:141] libmachine: (auto-639892) DBG | output  : 
	I0917 18:13:51.322388   61418 start.go:364] duration metric: took 1m10.302241973s to acquireMachinesLock for "kindnet-639892"
	I0917 18:13:51.322475   61418 start.go:93] Provisioning new machine with config: &{Name:kindnet-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:13:51.322594   61418 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:13:49.829422   61298 main.go:141] libmachine: (auto-639892) DBG | Getting to WaitForSSH function...
	I0917 18:13:49.831807   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:49.832137   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:49.832162   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:49.832302   61298 main.go:141] libmachine: (auto-639892) DBG | Using SSH client type: external
	I0917 18:13:49.832323   61298 main.go:141] libmachine: (auto-639892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa (-rw-------)
	I0917 18:13:49.832349   61298 main.go:141] libmachine: (auto-639892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:13:49.832361   61298 main.go:141] libmachine: (auto-639892) DBG | About to run SSH command:
	I0917 18:13:49.832375   61298 main.go:141] libmachine: (auto-639892) DBG | exit 0
	I0917 18:13:49.957459   61298 main.go:141] libmachine: (auto-639892) DBG | SSH cmd err, output: <nil>: 
	I0917 18:13:49.957799   61298 main.go:141] libmachine: (auto-639892) KVM machine creation complete!
	I0917 18:13:49.958122   61298 main.go:141] libmachine: (auto-639892) Calling .GetConfigRaw
	I0917 18:13:49.958788   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:49.958969   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:49.959177   61298 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:13:49.959193   61298 main.go:141] libmachine: (auto-639892) Calling .GetState
	I0917 18:13:49.960577   61298 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:13:49.960593   61298 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:13:49.960598   61298 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:13:49.960619   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:49.963257   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:49.963605   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:49.963632   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:49.963819   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:49.963978   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:49.964117   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:49.964268   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:49.964431   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:49.964692   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:49.964707   61298 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:13:50.076728   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:13:50.076756   61298 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:13:50.076767   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.079613   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.079991   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.080018   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.080122   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:50.080343   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.080535   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.080674   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:50.080825   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:50.081035   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:50.081051   61298 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:13:50.194535   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:13:50.194621   61298 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:13:50.194627   61298 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:13:50.194634   61298 main.go:141] libmachine: (auto-639892) Calling .GetMachineName
	I0917 18:13:50.194894   61298 buildroot.go:166] provisioning hostname "auto-639892"
	I0917 18:13:50.194931   61298 main.go:141] libmachine: (auto-639892) Calling .GetMachineName
	I0917 18:13:50.195155   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.197598   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.197973   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.198003   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.198161   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:50.198351   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.198520   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.198668   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:50.198820   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:50.198998   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:50.199010   61298 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-639892 && echo "auto-639892" | sudo tee /etc/hostname
	I0917 18:13:50.326389   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-639892
	
	I0917 18:13:50.326423   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.329251   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.329558   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.329583   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.329765   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:50.329969   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.330144   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.330289   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:50.330440   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:50.330599   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:50.330614   61298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-639892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-639892/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-639892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:13:50.450961   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:13:50.450990   61298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:13:50.451028   61298 buildroot.go:174] setting up certificates
	I0917 18:13:50.451037   61298 provision.go:84] configureAuth start
	I0917 18:13:50.451045   61298 main.go:141] libmachine: (auto-639892) Calling .GetMachineName
	I0917 18:13:50.451344   61298 main.go:141] libmachine: (auto-639892) Calling .GetIP
	I0917 18:13:50.454060   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.454429   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.454458   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.454586   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.456699   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.457035   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.457061   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.457181   61298 provision.go:143] copyHostCerts
	I0917 18:13:50.457286   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:13:50.457302   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:13:50.457364   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:13:50.457481   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:13:50.457491   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:13:50.457513   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:13:50.457596   61298 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:13:50.457604   61298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:13:50.457621   61298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:13:50.457679   61298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.auto-639892 san=[127.0.0.1 192.168.61.95 auto-639892 localhost minikube]
	I0917 18:13:50.649140   61298 provision.go:177] copyRemoteCerts
	I0917 18:13:50.649221   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:13:50.649299   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.651855   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.652212   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.652232   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.652515   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:50.652753   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.652922   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:50.653050   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:13:50.739849   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:13:50.764823   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0917 18:13:50.787456   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:13:50.811659   61298 provision.go:87] duration metric: took 360.609512ms to configureAuth
	I0917 18:13:50.811683   61298 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:13:50.811871   61298 config.go:182] Loaded profile config "auto-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:13:50.811945   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:50.814685   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.815116   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:50.815145   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:50.815327   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:50.815533   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.815712   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:50.815854   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:50.816046   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:50.816244   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:50.816260   61298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:13:51.062181   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
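The step above pushes a sysconfig drop-in over SSH and restarts CRI-O on the guest. As a minimal sketch (not minikube's ssh_runner/sshutil code), the same remote command could be issued with golang.org/x/crypto/ssh; the host, user, and key path mirror the values logged above but are otherwise assumptions.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Per-machine private key (path taken from the log; treat as an example).
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.61.95:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same shape of command as in the log: write the sysconfig drop-in, then restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
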
	I0917 18:13:51.062213   61298 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:13:51.062225   61298 main.go:141] libmachine: (auto-639892) Calling .GetURL
	I0917 18:13:51.063661   61298 main.go:141] libmachine: (auto-639892) DBG | Using libvirt version 6000000
	I0917 18:13:51.066103   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.066454   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.066484   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.066633   61298 main.go:141] libmachine: Docker is up and running!
	I0917 18:13:51.066651   61298 main.go:141] libmachine: Reticulating splines...
	I0917 18:13:51.066657   61298 client.go:171] duration metric: took 27.094799913s to LocalClient.Create
	I0917 18:13:51.066681   61298 start.go:167] duration metric: took 27.094877923s to libmachine.API.Create "auto-639892"
	I0917 18:13:51.066692   61298 start.go:293] postStartSetup for "auto-639892" (driver="kvm2")
	I0917 18:13:51.066714   61298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:13:51.066733   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:51.066998   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:13:51.067033   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:51.069481   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.069828   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.069850   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.070036   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:51.070229   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:51.070410   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:51.070551   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:13:51.156493   61298 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:13:51.161080   61298 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:13:51.161107   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:13:51.161189   61298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:13:51.161301   61298 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:13:51.161417   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:13:51.171578   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:13:51.196567   61298 start.go:296] duration metric: took 129.857648ms for postStartSetup
	I0917 18:13:51.196620   61298 main.go:141] libmachine: (auto-639892) Calling .GetConfigRaw
	I0917 18:13:51.197285   61298 main.go:141] libmachine: (auto-639892) Calling .GetIP
	I0917 18:13:51.200131   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.200476   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.200502   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.200792   61298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/config.json ...
	I0917 18:13:51.200997   61298 start.go:128] duration metric: took 27.254246671s to createHost
	I0917 18:13:51.201024   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:51.203314   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.203578   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.203611   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.203728   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:51.203909   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:51.204066   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:51.204213   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:51.204356   61298 main.go:141] libmachine: Using SSH client type: native
	I0917 18:13:51.204569   61298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.95 22 <nil> <nil>}
	I0917 18:13:51.204586   61298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:13:51.322225   61298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596831.282790836
	
	I0917 18:13:51.322251   61298 fix.go:216] guest clock: 1726596831.282790836
	I0917 18:13:51.322261   61298 fix.go:229] Guest: 2024-09-17 18:13:51.282790836 +0000 UTC Remote: 2024-09-17 18:13:51.201008861 +0000 UTC m=+71.827644232 (delta=81.781975ms)
	I0917 18:13:51.322283   61298 fix.go:200] guest clock delta is within tolerance: 81.781975ms
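Here the controller runs `date +%s.%N` on the guest and compares it against the host clock, accepting the ~82 ms skew. A rough sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest-minus-host skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values lifted from the log above.
	delta, err := clockDelta("1726596831.282790836", time.Unix(0, 1726596831201008861))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // assumed tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
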
	I0917 18:13:51.322290   61298 start.go:83] releasing machines lock for "auto-639892", held for 27.37574216s
	I0917 18:13:51.322316   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:51.322612   61298 main.go:141] libmachine: (auto-639892) Calling .GetIP
	I0917 18:13:51.325439   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.325858   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.325889   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.326043   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:51.326577   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:51.326743   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:13:51.326799   61298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:13:51.326855   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:51.326976   61298 ssh_runner.go:195] Run: cat /version.json
	I0917 18:13:51.327003   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:13:51.329527   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.329663   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.329944   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.329965   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.329990   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:51.330012   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:51.330159   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:51.330266   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:13:51.330357   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:51.330416   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:13:51.330524   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:51.330664   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:13:51.330675   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:13:51.330809   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:13:51.423061   61298 ssh_runner.go:195] Run: systemctl --version
	I0917 18:13:51.446633   61298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:13:51.613379   61298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:13:51.620466   61298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:13:51.620552   61298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:13:51.640900   61298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:13:51.640932   61298 start.go:495] detecting cgroup driver to use...
	I0917 18:13:51.641010   61298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:13:51.658600   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:13:51.674236   61298 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:13:51.674306   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:13:51.692068   61298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:13:51.710015   61298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:13:51.836090   61298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:13:51.979978   61298 docker.go:233] disabling docker service ...
	I0917 18:13:51.980058   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:13:51.995044   61298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:13:52.009620   61298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:13:52.157432   61298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:13:52.326044   61298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:13:52.342235   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:13:52.364441   61298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:13:52.364516   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.376038   61298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:13:52.376112   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.388510   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.399165   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.409900   61298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:13:52.421169   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.432869   61298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:13:52.453607   61298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
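The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open up unprivileged ports. A minimal in-memory sketch of the same edits using Go regexps instead of sed; the field names come from the log, but the helper itself is illustrative.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same edits the log performs with sed, but on a string.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then add one right after cgroup_manager.
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	conf = conmon.ReplaceAllString(conf, "")
	after := regexp.MustCompile(`(?m)^cgroup_manager = .*$`)
	conf = after.ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}
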
	I0917 18:13:52.465869   61298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:13:52.476421   61298 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:13:52.476491   61298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:13:52.490352   61298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:13:52.501593   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:13:52.636128   61298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:13:52.739590   61298 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:13:52.739677   61298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:13:52.745364   61298 start.go:563] Will wait 60s for crictl version
	I0917 18:13:52.745426   61298 ssh_runner.go:195] Run: which crictl
	I0917 18:13:52.749411   61298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:13:52.795879   61298 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:13:52.796004   61298 ssh_runner.go:195] Run: crio --version
	I0917 18:13:52.827173   61298 ssh_runner.go:195] Run: crio --version
	I0917 18:13:52.862462   61298 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:13:52.863605   61298 main.go:141] libmachine: (auto-639892) Calling .GetIP
	I0917 18:13:52.866308   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:52.866739   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:13:52.866781   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:13:52.866979   61298 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:13:52.871707   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:13:52.887274   61298 kubeadm.go:883] updating cluster {Name:auto-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0917 18:13:52.887406   61298 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:13:52.887488   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:13:52.934224   61298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:13:52.934320   61298 ssh_runner.go:195] Run: which lz4
	I0917 18:13:52.938637   61298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:13:52.943064   61298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:13:52.943098   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
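Before copying the 388 MB preload tarball, the step above asks CRI-O for its image list (`sudo crictl images --output json`) and looks for the expected kube-apiserver image. A small sketch of that check, assuming the usual crictl JSON shape ({"images":[{"repoTags":[...]}]}); this is not the actual crio.go code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether crictl lists an image whose tag contains want.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("images not preloaded; would scp and extract /preloaded.tar.lz4")
	}
}
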
	I0917 18:13:51.324806   61418 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 18:13:51.325001   61418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:13:51.325053   61418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:13:51.342466   61418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0917 18:13:51.342963   61418 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:13:51.343604   61418 main.go:141] libmachine: Using API Version  1
	I0917 18:13:51.343638   61418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:13:51.343966   61418 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:13:51.344197   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetMachineName
	I0917 18:13:51.344338   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:13:51.344494   61418 start.go:159] libmachine.API.Create for "kindnet-639892" (driver="kvm2")
	I0917 18:13:51.344524   61418 client.go:168] LocalClient.Create starting
	I0917 18:13:51.344558   61418 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:13:51.344594   61418 main.go:141] libmachine: Decoding PEM data...
	I0917 18:13:51.344624   61418 main.go:141] libmachine: Parsing certificate...
	I0917 18:13:51.344687   61418 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:13:51.344713   61418 main.go:141] libmachine: Decoding PEM data...
	I0917 18:13:51.344729   61418 main.go:141] libmachine: Parsing certificate...
	I0917 18:13:51.344755   61418 main.go:141] libmachine: Running pre-create checks...
	I0917 18:13:51.344766   61418 main.go:141] libmachine: (kindnet-639892) Calling .PreCreateCheck
	I0917 18:13:51.345259   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetConfigRaw
	I0917 18:13:51.345672   61418 main.go:141] libmachine: Creating machine...
	I0917 18:13:51.345687   61418 main.go:141] libmachine: (kindnet-639892) Calling .Create
	I0917 18:13:51.345856   61418 main.go:141] libmachine: (kindnet-639892) Creating KVM machine...
	I0917 18:13:51.347058   61418 main.go:141] libmachine: (kindnet-639892) DBG | found existing default KVM network
	I0917 18:13:51.348651   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.348466   62205 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:03:57} reservation:<nil>}
	I0917 18:13:51.349747   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.349644   62205 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:17:1a} reservation:<nil>}
	I0917 18:13:51.350956   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.350872   62205 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:59:e6} reservation:<nil>}
	I0917 18:13:51.352408   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.352298   62205 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003d6610}
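The second profile (kindnet-639892, pid 61418) walks candidate /24s, skips 192.168.39.0/24, 192.168.50.0/24, and 192.168.61.0/24 because they are already attached to virbr1..3, and settles on 192.168.72.0/24. A simplified sketch of that scan; the candidate list and its order are assumptions, not network.go's actual search.

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not already taken.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			fmt.Printf("skipping subnet %s that is taken\n", cidr)
			continue
		}
		return cidr, true
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true, // virbr1
		"192.168.50.0/24": true, // virbr2
		"192.168.61.0/24": true, // virbr3 (the auto-639892 network)
	}
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr)
	}
}
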
	I0917 18:13:51.352497   61418 main.go:141] libmachine: (kindnet-639892) DBG | created network xml: 
	I0917 18:13:51.352513   61418 main.go:141] libmachine: (kindnet-639892) DBG | <network>
	I0917 18:13:51.352522   61418 main.go:141] libmachine: (kindnet-639892) DBG |   <name>mk-kindnet-639892</name>
	I0917 18:13:51.352526   61418 main.go:141] libmachine: (kindnet-639892) DBG |   <dns enable='no'/>
	I0917 18:13:51.352610   61418 main.go:141] libmachine: (kindnet-639892) DBG |   
	I0917 18:13:51.352651   61418 main.go:141] libmachine: (kindnet-639892) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0917 18:13:51.352675   61418 main.go:141] libmachine: (kindnet-639892) DBG |     <dhcp>
	I0917 18:13:51.352689   61418 main.go:141] libmachine: (kindnet-639892) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0917 18:13:51.352704   61418 main.go:141] libmachine: (kindnet-639892) DBG |     </dhcp>
	I0917 18:13:51.352711   61418 main.go:141] libmachine: (kindnet-639892) DBG |   </ip>
	I0917 18:13:51.352716   61418 main.go:141] libmachine: (kindnet-639892) DBG |   
	I0917 18:13:51.352720   61418 main.go:141] libmachine: (kindnet-639892) DBG | </network>
	I0917 18:13:51.352727   61418 main.go:141] libmachine: (kindnet-639892) DBG | 
	I0917 18:13:51.358554   61418 main.go:141] libmachine: (kindnet-639892) DBG | trying to create private KVM network mk-kindnet-639892 192.168.72.0/24...
	I0917 18:13:51.439006   61418 main.go:141] libmachine: (kindnet-639892) DBG | private KVM network mk-kindnet-639892 192.168.72.0/24 created
	I0917 18:13:51.439041   61418 main.go:141] libmachine: (kindnet-639892) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892 ...
	I0917 18:13:51.439073   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.438877   62205 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:13:51.439096   61418 main.go:141] libmachine: (kindnet-639892) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:13:51.439112   61418 main.go:141] libmachine: (kindnet-639892) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:13:51.680057   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.679905   62205 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa...
	I0917 18:13:51.803193   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.803076   62205 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/kindnet-639892.rawdisk...
	I0917 18:13:51.803220   61418 main.go:141] libmachine: (kindnet-639892) DBG | Writing magic tar header
	I0917 18:13:51.803233   61418 main.go:141] libmachine: (kindnet-639892) DBG | Writing SSH key tar header
	I0917 18:13:51.803245   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:51.803182   62205 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892 ...
	I0917 18:13:51.803299   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892
	I0917 18:13:51.803349   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:13:51.803368   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892 (perms=drwx------)
	I0917 18:13:51.803378   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:13:51.803396   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:13:51.803405   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:13:51.803411   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:13:51.803421   61418 main.go:141] libmachine: (kindnet-639892) DBG | Checking permissions on dir: /home
	I0917 18:13:51.803438   61418 main.go:141] libmachine: (kindnet-639892) DBG | Skipping /home - not owner
	I0917 18:13:51.803464   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:13:51.803479   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:13:51.803490   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:13:51.803501   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:13:51.803508   61418 main.go:141] libmachine: (kindnet-639892) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:13:51.803514   61418 main.go:141] libmachine: (kindnet-639892) Creating domain...
	I0917 18:13:51.804747   61418 main.go:141] libmachine: (kindnet-639892) define libvirt domain using xml: 
	I0917 18:13:51.804771   61418 main.go:141] libmachine: (kindnet-639892) <domain type='kvm'>
	I0917 18:13:51.804782   61418 main.go:141] libmachine: (kindnet-639892)   <name>kindnet-639892</name>
	I0917 18:13:51.804793   61418 main.go:141] libmachine: (kindnet-639892)   <memory unit='MiB'>3072</memory>
	I0917 18:13:51.804801   61418 main.go:141] libmachine: (kindnet-639892)   <vcpu>2</vcpu>
	I0917 18:13:51.804807   61418 main.go:141] libmachine: (kindnet-639892)   <features>
	I0917 18:13:51.804824   61418 main.go:141] libmachine: (kindnet-639892)     <acpi/>
	I0917 18:13:51.804833   61418 main.go:141] libmachine: (kindnet-639892)     <apic/>
	I0917 18:13:51.804850   61418 main.go:141] libmachine: (kindnet-639892)     <pae/>
	I0917 18:13:51.804859   61418 main.go:141] libmachine: (kindnet-639892)     
	I0917 18:13:51.804888   61418 main.go:141] libmachine: (kindnet-639892)   </features>
	I0917 18:13:51.804911   61418 main.go:141] libmachine: (kindnet-639892)   <cpu mode='host-passthrough'>
	I0917 18:13:51.804948   61418 main.go:141] libmachine: (kindnet-639892)   
	I0917 18:13:51.804974   61418 main.go:141] libmachine: (kindnet-639892)   </cpu>
	I0917 18:13:51.804986   61418 main.go:141] libmachine: (kindnet-639892)   <os>
	I0917 18:13:51.804999   61418 main.go:141] libmachine: (kindnet-639892)     <type>hvm</type>
	I0917 18:13:51.805010   61418 main.go:141] libmachine: (kindnet-639892)     <boot dev='cdrom'/>
	I0917 18:13:51.805024   61418 main.go:141] libmachine: (kindnet-639892)     <boot dev='hd'/>
	I0917 18:13:51.805033   61418 main.go:141] libmachine: (kindnet-639892)     <bootmenu enable='no'/>
	I0917 18:13:51.805042   61418 main.go:141] libmachine: (kindnet-639892)   </os>
	I0917 18:13:51.805049   61418 main.go:141] libmachine: (kindnet-639892)   <devices>
	I0917 18:13:51.805060   61418 main.go:141] libmachine: (kindnet-639892)     <disk type='file' device='cdrom'>
	I0917 18:13:51.805074   61418 main.go:141] libmachine: (kindnet-639892)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/boot2docker.iso'/>
	I0917 18:13:51.805085   61418 main.go:141] libmachine: (kindnet-639892)       <target dev='hdc' bus='scsi'/>
	I0917 18:13:51.805093   61418 main.go:141] libmachine: (kindnet-639892)       <readonly/>
	I0917 18:13:51.805106   61418 main.go:141] libmachine: (kindnet-639892)     </disk>
	I0917 18:13:51.805126   61418 main.go:141] libmachine: (kindnet-639892)     <disk type='file' device='disk'>
	I0917 18:13:51.805138   61418 main.go:141] libmachine: (kindnet-639892)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:13:51.805151   61418 main.go:141] libmachine: (kindnet-639892)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/kindnet-639892.rawdisk'/>
	I0917 18:13:51.805162   61418 main.go:141] libmachine: (kindnet-639892)       <target dev='hda' bus='virtio'/>
	I0917 18:13:51.805174   61418 main.go:141] libmachine: (kindnet-639892)     </disk>
	I0917 18:13:51.805189   61418 main.go:141] libmachine: (kindnet-639892)     <interface type='network'>
	I0917 18:13:51.805212   61418 main.go:141] libmachine: (kindnet-639892)       <source network='mk-kindnet-639892'/>
	I0917 18:13:51.805223   61418 main.go:141] libmachine: (kindnet-639892)       <model type='virtio'/>
	I0917 18:13:51.805268   61418 main.go:141] libmachine: (kindnet-639892)     </interface>
	I0917 18:13:51.805289   61418 main.go:141] libmachine: (kindnet-639892)     <interface type='network'>
	I0917 18:13:51.805325   61418 main.go:141] libmachine: (kindnet-639892)       <source network='default'/>
	I0917 18:13:51.805344   61418 main.go:141] libmachine: (kindnet-639892)       <model type='virtio'/>
	I0917 18:13:51.805353   61418 main.go:141] libmachine: (kindnet-639892)     </interface>
	I0917 18:13:51.805363   61418 main.go:141] libmachine: (kindnet-639892)     <serial type='pty'>
	I0917 18:13:51.805378   61418 main.go:141] libmachine: (kindnet-639892)       <target port='0'/>
	I0917 18:13:51.805387   61418 main.go:141] libmachine: (kindnet-639892)     </serial>
	I0917 18:13:51.805408   61418 main.go:141] libmachine: (kindnet-639892)     <console type='pty'>
	I0917 18:13:51.805419   61418 main.go:141] libmachine: (kindnet-639892)       <target type='serial' port='0'/>
	I0917 18:13:51.805424   61418 main.go:141] libmachine: (kindnet-639892)     </console>
	I0917 18:13:51.805431   61418 main.go:141] libmachine: (kindnet-639892)     <rng model='virtio'>
	I0917 18:13:51.805441   61418 main.go:141] libmachine: (kindnet-639892)       <backend model='random'>/dev/random</backend>
	I0917 18:13:51.805451   61418 main.go:141] libmachine: (kindnet-639892)     </rng>
	I0917 18:13:51.805459   61418 main.go:141] libmachine: (kindnet-639892)     
	I0917 18:13:51.805468   61418 main.go:141] libmachine: (kindnet-639892)     
	I0917 18:13:51.805476   61418 main.go:141] libmachine: (kindnet-639892)   </devices>
	I0917 18:13:51.805485   61418 main.go:141] libmachine: (kindnet-639892) </domain>
	I0917 18:13:51.805494   61418 main.go:141] libmachine: (kindnet-639892) 
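The domain XML above is generated from the machine's name, memory, vCPU count, ISO path, raw-disk path, and private network. A minimal text/template sketch that renders the same skeleton; the parameter names are a guess at the parameterization, not the kvm2 driver's real template.

package main

import (
	"os"
	"text/template"
)

// Hypothetical parameters; the values mirror the kindnet-639892 log above.
type domainParams struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISO      string
	Disk     string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:     "kindnet-639892",
		MemoryMB: 3072,
		VCPU:     2,
		ISO:      "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/boot2docker.iso",
		Disk:     "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/kindnet-639892.rawdisk",
		Network:  "mk-kindnet-639892",
	}
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, p)
}
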
	I0917 18:13:51.812181   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:01:03:ba in network default
	I0917 18:13:51.812825   61418 main.go:141] libmachine: (kindnet-639892) Ensuring networks are active...
	I0917 18:13:51.812851   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:51.813765   61418 main.go:141] libmachine: (kindnet-639892) Ensuring network default is active
	I0917 18:13:51.814131   61418 main.go:141] libmachine: (kindnet-639892) Ensuring network mk-kindnet-639892 is active
	I0917 18:13:51.814761   61418 main.go:141] libmachine: (kindnet-639892) Getting domain xml...
	I0917 18:13:51.815674   61418 main.go:141] libmachine: (kindnet-639892) Creating domain...
	I0917 18:13:53.237071   61418 main.go:141] libmachine: (kindnet-639892) Waiting to get IP...
	I0917 18:13:53.238104   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:53.238754   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:53.238782   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:53.238719   62205 retry.go:31] will retry after 246.978296ms: waiting for machine to come up
	I0917 18:13:53.487405   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:53.488082   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:53.488106   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:53.488033   62205 retry.go:31] will retry after 359.133666ms: waiting for machine to come up
	I0917 18:13:53.848745   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:53.849313   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:53.849337   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:53.849263   62205 retry.go:31] will retry after 400.183619ms: waiting for machine to come up
	I0917 18:13:54.250969   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:54.251550   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:54.251578   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:54.251506   62205 retry.go:31] will retry after 395.425806ms: waiting for machine to come up
	I0917 18:13:54.648164   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:54.648674   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:54.648703   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:54.648608   62205 retry.go:31] will retry after 665.520774ms: waiting for machine to come up
	I0917 18:13:55.315617   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:55.316198   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:55.316244   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:55.316146   62205 retry.go:31] will retry after 888.724078ms: waiting for machine to come up
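While the first profile continues provisioning, the kindnet machine polls for a DHCP lease, backing off between attempts (247 ms, 359 ms, 400 ms, ...). A generic sketch of that retry pattern with jittered, growing delays; the lookup function and limits are placeholders, not the retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a jittered,
// growing delay in between, roughly like the retry lines in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retry(10, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err)
}
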
	I0917 18:13:54.505455   61298 crio.go:462] duration metric: took 1.566839876s to copy over tarball
	I0917 18:13:54.505546   61298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:13:56.832449   61298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.326876527s)
	I0917 18:13:56.832474   61298 crio.go:469] duration metric: took 2.326987833s to extract the tarball
	I0917 18:13:56.832482   61298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:13:56.879426   61298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:13:56.932230   61298 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:13:56.932258   61298 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:13:56.932269   61298 kubeadm.go:934] updating node { 192.168.61.95 8443 v1.31.1 crio true true} ...
	I0917 18:13:56.932394   61298 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-639892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
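The kubelet drop-in above pins --hostname-override and --node-ip to the profile's name and address. A tiny sketch of assembling that ExecStart line from those two values; the helper name is made up.

package main

import "fmt"

// kubeletExecStart builds the ExecStart line written to the systemd drop-in,
// mirroring the flags visible in the log for auto-639892.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.1", "auto-639892", "192.168.61.95"))
}
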
	I0917 18:13:56.932479   61298 ssh_runner.go:195] Run: crio config
	I0917 18:13:56.985558   61298 cni.go:84] Creating CNI manager for ""
	I0917 18:13:56.985583   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:13:56.985592   61298 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:13:56.985611   61298 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.95 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-639892 NodeName:auto-639892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:13:56.985779   61298 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-639892"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:13:56.985860   61298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:13:56.997026   61298 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:13:56.997103   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:13:57.010183   61298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0917 18:13:57.028612   61298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:13:57.047330   61298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0917 18:13:57.065837   61298 ssh_runner.go:195] Run: grep 192.168.61.95	control-plane.minikube.internal$ /etc/hosts
	I0917 18:13:57.070053   61298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:13:57.084325   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:13:57.220084   61298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:13:57.239116   61298 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892 for IP: 192.168.61.95
	I0917 18:13:57.239143   61298 certs.go:194] generating shared ca certs ...
	I0917 18:13:57.239164   61298 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.239347   61298 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:13:57.239403   61298 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:13:57.239415   61298 certs.go:256] generating profile certs ...
	I0917 18:13:57.239506   61298 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.key
	I0917 18:13:57.239536   61298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt with IP's: []
	I0917 18:13:57.398768   61298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt ...
	I0917 18:13:57.398804   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: {Name:mk469c261fe3ef7ff347bb4382dfa555b30e252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.399071   61298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.key ...
	I0917 18:13:57.399091   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.key: {Name:mkb018bc32c277fa5a5cc74e6b797b02ce6aadf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.399210   61298 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key.6f155fc1
	I0917 18:13:57.399226   61298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt.6f155fc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.95]
	I0917 18:13:57.601756   61298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt.6f155fc1 ...
	I0917 18:13:57.601787   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt.6f155fc1: {Name:mk2e32f7a65d3f532c46066a17e0f929336e10c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.601985   61298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key.6f155fc1 ...
	I0917 18:13:57.602003   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key.6f155fc1: {Name:mk43318e70e4542583195feda035a458bc879b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.602103   61298 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt.6f155fc1 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt
	I0917 18:13:57.602183   61298 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key.6f155fc1 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key
	I0917 18:13:57.602235   61298 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.key
	I0917 18:13:57.602249   61298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.crt with IP's: []
	I0917 18:13:57.663160   61298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.crt ...
	I0917 18:13:57.663194   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.crt: {Name:mk93a244398f87d614f57489db7c0b50864d99b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:13:57.663398   61298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.key ...
	I0917 18:13:57.663414   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.key: {Name:mk7732ca1226760941c153e89efabda68bca608d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
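The block above issues the profile's client, apiserver (with the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, and 192.168.61.95), and proxy-client certificates against the shared minikubeCA. A minimal crypto/x509 sketch of signing such a cert with IP SANs from an existing CA key pair; the file paths, subject, validity, and PKCS#1 key encoding are assumptions, not certs.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA reads a PEM CA certificate and RSA key (PKCS#1 assumed).
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
	cb, err := os.ReadFile(certPath)
	if err != nil {
		return nil, nil, err
	}
	kb, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, nil, err
	}
	cBlock, _ := pem.Decode(cb)
	kBlock, _ := pem.Decode(kb)
	cert, err := x509.ParseCertificate(cBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	key, err := x509.ParsePKCS1PrivateKey(kBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	return cert, key, nil
}

func main() {
	caCert, caKey, err := loadCA("ca.crt", "ca.key") // hypothetical paths
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		// Same IP SANs the log passes for the apiserver cert.
		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.95")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("apiserver.crt", certPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("apiserver.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}
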
	I0917 18:13:57.663630   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:13:57.663666   61298 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:13:57.663678   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:13:57.663738   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:13:57.663775   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:13:57.663795   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:13:57.663840   61298 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:13:57.664477   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:13:57.692001   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:13:57.720022   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:13:57.753291   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:13:57.784709   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0917 18:13:57.817685   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:13:57.850596   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:13:57.879749   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:13:57.910730   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:13:57.964021   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:13:57.995810   61298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:13:58.022550   61298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:13:58.040833   61298 ssh_runner.go:195] Run: openssl version
	I0917 18:13:58.047499   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:13:58.063615   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:58.069085   61298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:58.069160   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:13:58.075768   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:13:58.088901   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:13:58.100831   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:13:58.105593   61298 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:13:58.105655   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:13:58.111611   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:13:58.124161   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:13:58.136893   61298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:13:58.142251   61298 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:13:58.142322   61298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:13:58.148406   61298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
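[editor's note] The three link steps above follow the standard OpenSSL hashed-CA-directory convention: each certificate copied under /usr/share/ca-certificates is exposed in /etc/ssl/certs under its subject hash with a ".0" suffix. A minimal sketch of the same convention, using the minikubeCA certificate from the log (the hash b5213941 matches the symlink created above):

  # Subject hash OpenSSL uses to look certificates up in a hashed CA directory
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # Link the certificate under that hash so TLS clients resolve it (e.g. b5213941.0)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"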
	I0917 18:13:58.160781   61298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:13:58.165737   61298 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:13:58.165819   61298 kubeadm.go:392] StartCluster: {Name:auto-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:13:58.165926   61298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:13:58.166013   61298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:13:58.211613   61298 cri.go:89] found id: ""
	I0917 18:13:58.211702   61298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:13:58.226981   61298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:13:58.238826   61298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:13:58.249985   61298 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:13:58.250009   61298 kubeadm.go:157] found existing configuration files:
	
	I0917 18:13:58.250060   61298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:13:58.260890   61298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:13:58.260959   61298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:13:58.272264   61298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:13:58.286941   61298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:13:58.287001   61298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:13:58.300604   61298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:13:58.311952   61298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:13:58.312039   61298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:13:58.323728   61298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:13:58.335051   61298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:13:58.335137   61298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:13:58.347112   61298 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:13:58.410271   61298 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:13:58.410377   61298 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:13:58.528447   61298 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:13:58.528608   61298 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:13:58.528748   61298 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:13:58.538161   61298 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:13:58.612077   61298 out.go:235]   - Generating certificates and keys ...
	I0917 18:13:58.612222   61298 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:13:58.612340   61298 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:13:58.835735   61298 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:13:58.999626   61298 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:13:59.256815   61298 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:13:56.206440   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:56.206913   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:56.206952   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:56.206887   62205 retry.go:31] will retry after 751.279093ms: waiting for machine to come up
	I0917 18:13:56.959397   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:56.959906   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:56.959934   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:56.959848   62205 retry.go:31] will retry after 1.432002024s: waiting for machine to come up
	I0917 18:13:58.393624   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:13:58.394138   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:13:58.394166   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:13:58.394081   62205 retry.go:31] will retry after 1.656145329s: waiting for machine to come up
	I0917 18:14:00.051457   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:00.051903   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:14:00.051932   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:14:00.051850   62205 retry.go:31] will retry after 2.308762179s: waiting for machine to come up
	I0917 18:13:59.443272   61298 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:13:59.625428   61298 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:13:59.625707   61298 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-639892 localhost] and IPs [192.168.61.95 127.0.0.1 ::1]
	I0917 18:13:59.829573   61298 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:13:59.829856   61298 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-639892 localhost] and IPs [192.168.61.95 127.0.0.1 ::1]
	I0917 18:13:59.898168   61298 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:14:00.029444   61298 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:14:00.185661   61298 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:14:00.185935   61298 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:14:00.509245   61298 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:14:00.669520   61298 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:14:00.912678   61298 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:14:01.087736   61298 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:14:01.188942   61298 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:14:01.189769   61298 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:14:01.195159   61298 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:14:01.197214   61298 out.go:235]   - Booting up control plane ...
	I0917 18:14:01.197362   61298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:14:01.197478   61298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:14:01.197947   61298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:14:01.215979   61298 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:14:01.222388   61298 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:14:01.222520   61298 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:14:01.370549   61298 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:14:01.370733   61298 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:14:01.871329   61298 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.240653ms
	I0917 18:14:01.871503   61298 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:14:02.362301   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:02.362824   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:14:02.362855   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:14:02.362773   62205 retry.go:31] will retry after 1.789280664s: waiting for machine to come up
	I0917 18:14:04.154404   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:04.154938   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:14:04.154966   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:14:04.154884   62205 retry.go:31] will retry after 2.895050015s: waiting for machine to come up
	I0917 18:14:07.370932   61298 kubeadm.go:310] [api-check] The API server is healthy after 5.501798968s
	I0917 18:14:07.387472   61298 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:14:07.407943   61298 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:14:07.442214   61298 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:14:07.442434   61298 kubeadm.go:310] [mark-control-plane] Marking the node auto-639892 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:14:07.467824   61298 kubeadm.go:310] [bootstrap-token] Using token: wt2dlm.4lxp1bx47jm30a9i
	I0917 18:14:07.469700   61298 out.go:235]   - Configuring RBAC rules ...
	I0917 18:14:07.469829   61298 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:14:07.480193   61298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:14:07.490982   61298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:14:07.500342   61298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:14:07.508204   61298 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:14:07.516393   61298 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:14:07.778720   61298 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:14:08.230634   61298 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:14:08.779970   61298 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:14:08.780010   61298 kubeadm.go:310] 
	I0917 18:14:08.780077   61298 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:14:08.780087   61298 kubeadm.go:310] 
	I0917 18:14:08.780228   61298 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:14:08.780239   61298 kubeadm.go:310] 
	I0917 18:14:08.780273   61298 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:14:08.780354   61298 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:14:08.780421   61298 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:14:08.780430   61298 kubeadm.go:310] 
	I0917 18:14:08.780520   61298 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:14:08.780551   61298 kubeadm.go:310] 
	I0917 18:14:08.780623   61298 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:14:08.780635   61298 kubeadm.go:310] 
	I0917 18:14:08.780715   61298 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:14:08.780826   61298 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:14:08.780944   61298 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:14:08.780964   61298 kubeadm.go:310] 
	I0917 18:14:08.781086   61298 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:14:08.781179   61298 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:14:08.781188   61298 kubeadm.go:310] 
	I0917 18:14:08.781310   61298 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wt2dlm.4lxp1bx47jm30a9i \
	I0917 18:14:08.781522   61298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:14:08.781557   61298 kubeadm.go:310] 	--control-plane 
	I0917 18:14:08.781564   61298 kubeadm.go:310] 
	I0917 18:14:08.781676   61298 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:14:08.781688   61298 kubeadm.go:310] 
	I0917 18:14:08.781783   61298 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wt2dlm.4lxp1bx47jm30a9i \
	I0917 18:14:08.781914   61298 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:14:08.783086   61298 kubeadm.go:310] W0917 18:13:58.370999     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:14:08.783373   61298 kubeadm.go:310] W0917 18:13:58.371873     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:14:08.783478   61298 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
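[editor's note] The sha256 value passed to --discovery-token-ca-cert-hash in the join commands above is the hash of the cluster CA's public key. For reference, it can be recomputed on the control-plane node with the standard openssl pipeline; the certificate path below uses the certificateDir reported earlier in this log (/var/lib/minikube/certs):

  # Recompute the discovery-token CA certificate hash from the cluster CA
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'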
	I0917 18:14:08.783494   61298 cni.go:84] Creating CNI manager for ""
	I0917 18:14:08.783500   61298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:14:08.785474   61298 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:14:08.786753   61298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:14:08.804929   61298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
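[editor's note] The step above writes minikube's bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The 496-byte file's contents are not shown in the log, so the sketch below is only an illustration of what a bridge + portmap conflist of this kind generally looks like; the bridge name and pod subnet are assumptions, not the values minikube used:

  # Illustrative only: a minimal bridge CNI conflist (values are assumptions)
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.4.0",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF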
	I0917 18:14:08.828180   61298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:14:08.828255   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:08.828277   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-639892 minikube.k8s.io/updated_at=2024_09_17T18_14_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=auto-639892 minikube.k8s.io/primary=true
	I0917 18:14:08.991760   61298 ops.go:34] apiserver oom_adj: -16
	I0917 18:14:08.991802   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:07.052059   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:07.052594   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:14:07.052622   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:14:07.052539   62205 retry.go:31] will retry after 4.404326894s: waiting for machine to come up
	I0917 18:14:09.492525   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:09.992792   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:10.492502   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:10.992228   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:11.492840   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:11.992575   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:12.491867   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:12.992464   61298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:14:13.124637   61298 kubeadm.go:1113] duration metric: took 4.296453914s to wait for elevateKubeSystemPrivileges
	I0917 18:14:13.124679   61298 kubeadm.go:394] duration metric: took 14.958864471s to StartCluster
	I0917 18:14:13.124700   61298 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:13.124784   61298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:14:13.126561   61298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:13.126833   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 18:14:13.126852   61298 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:14:13.126917   61298 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:14:13.127005   61298 addons.go:69] Setting storage-provisioner=true in profile "auto-639892"
	I0917 18:14:13.127029   61298 addons.go:234] Setting addon storage-provisioner=true in "auto-639892"
	I0917 18:14:13.127045   61298 addons.go:69] Setting default-storageclass=true in profile "auto-639892"
	I0917 18:14:13.127069   61298 config.go:182] Loaded profile config "auto-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:14:13.127108   61298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-639892"
	I0917 18:14:13.127067   61298 host.go:66] Checking if "auto-639892" exists ...
	I0917 18:14:13.127626   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:13.127648   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:13.127666   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:13.127669   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:13.128869   61298 out.go:177] * Verifying Kubernetes components...
	I0917 18:14:13.130594   61298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:14:13.145142   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0917 18:14:13.145810   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:13.146462   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:14:13.146496   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:13.146921   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:13.147144   61298 main.go:141] libmachine: (auto-639892) Calling .GetState
	I0917 18:14:13.147647   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0917 18:14:13.148030   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:13.148579   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:14:13.148597   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:13.149221   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:13.149783   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:13.149818   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:13.151284   61298 addons.go:234] Setting addon default-storageclass=true in "auto-639892"
	I0917 18:14:13.151330   61298 host.go:66] Checking if "auto-639892" exists ...
	I0917 18:14:13.151727   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:13.151773   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:13.166833   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0917 18:14:13.167413   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:13.167977   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:14:13.168007   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:13.168508   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:13.168722   61298 main.go:141] libmachine: (auto-639892) Calling .GetState
	I0917 18:14:13.170554   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:14:13.172764   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0917 18:14:13.172826   61298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:14:13.173302   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:13.173799   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:14:13.173812   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:13.174047   61298 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:14:13.174059   61298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:14:13.174072   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:14:13.174585   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:13.175355   61298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:13.175394   61298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:13.177450   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:14:13.177933   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:14:13.177954   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:14:13.178582   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:14:13.178750   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:14:13.178900   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:14:13.178989   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:14:13.192895   61298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0917 18:14:13.193785   61298 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:13.195222   61298 main.go:141] libmachine: Using API Version  1
	I0917 18:14:13.195247   61298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:13.197548   61298 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:13.197710   61298 main.go:141] libmachine: (auto-639892) Calling .GetState
	I0917 18:14:13.199745   61298 main.go:141] libmachine: (auto-639892) Calling .DriverName
	I0917 18:14:13.199991   61298 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:14:13.200014   61298 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:14:13.200036   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHHostname
	I0917 18:14:13.203698   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:14:13.204208   61298 main.go:141] libmachine: (auto-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:ab:b8", ip: ""} in network mk-auto-639892: {Iface:virbr3 ExpiryTime:2024-09-17 19:13:39 +0000 UTC Type:0 Mac:52:54:00:c6:ab:b8 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:auto-639892 Clientid:01:52:54:00:c6:ab:b8}
	I0917 18:14:13.204231   61298 main.go:141] libmachine: (auto-639892) DBG | domain auto-639892 has defined IP address 192.168.61.95 and MAC address 52:54:00:c6:ab:b8 in network mk-auto-639892
	I0917 18:14:13.204431   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHPort
	I0917 18:14:13.204620   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHKeyPath
	I0917 18:14:13.204800   61298 main.go:141] libmachine: (auto-639892) Calling .GetSSHUsername
	I0917 18:14:13.205031   61298 sshutil.go:53] new ssh client: &{IP:192.168.61.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/auto-639892/id_rsa Username:docker}
	I0917 18:14:13.478512   61298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:14:13.478513   61298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 18:14:13.486040   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:14:13.604880   61298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:14:13.943231   61298 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0917 18:14:13.943348   61298 main.go:141] libmachine: Making call to close driver server
	I0917 18:14:13.943369   61298 main.go:141] libmachine: (auto-639892) Calling .Close
	I0917 18:14:13.943767   61298 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:14:13.943844   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:14:13.943869   61298 main.go:141] libmachine: Making call to close driver server
	I0917 18:14:13.943884   61298 main.go:141] libmachine: (auto-639892) Calling .Close
	I0917 18:14:13.943801   61298 main.go:141] libmachine: (auto-639892) DBG | Closing plugin on server side
	I0917 18:14:13.944141   61298 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:14:13.944155   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:14:13.944650   61298 node_ready.go:35] waiting up to 15m0s for node "auto-639892" to be "Ready" ...
	I0917 18:14:13.955869   61298 node_ready.go:49] node "auto-639892" has status "Ready":"True"
	I0917 18:14:13.955896   61298 node_ready.go:38] duration metric: took 11.225066ms for node "auto-639892" to be "Ready" ...
	I0917 18:14:13.955907   61298 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:14:13.979862   61298 main.go:141] libmachine: Making call to close driver server
	I0917 18:14:13.979890   61298 main.go:141] libmachine: (auto-639892) Calling .Close
	I0917 18:14:13.980153   61298 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:14:13.980175   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:14:13.988222   61298 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace to be "Ready" ...
	I0917 18:14:14.448264   61298 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-639892" context rescaled to 1 replicas
	I0917 18:14:14.690387   61298 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.08546415s)
	I0917 18:14:14.690446   61298 main.go:141] libmachine: Making call to close driver server
	I0917 18:14:14.690458   61298 main.go:141] libmachine: (auto-639892) Calling .Close
	I0917 18:14:14.690759   61298 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:14:14.690780   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:14:14.690790   61298 main.go:141] libmachine: Making call to close driver server
	I0917 18:14:14.690797   61298 main.go:141] libmachine: (auto-639892) Calling .Close
	I0917 18:14:14.691090   61298 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:14:14.691106   61298 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:14:14.692830   61298 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0917 18:14:11.458838   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:11.459344   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find current IP address of domain kindnet-639892 in network mk-kindnet-639892
	I0917 18:14:11.459371   61418 main.go:141] libmachine: (kindnet-639892) DBG | I0917 18:14:11.459299   62205 retry.go:31] will retry after 4.74795043s: waiting for machine to come up
	I0917 18:14:17.690734   61797 start.go:364] duration metric: took 1m3.409503815s to acquireMachinesLock for "kubernetes-upgrade-644038"
	I0917 18:14:17.690785   61797 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:14:17.690791   61797 fix.go:54] fixHost starting: 
	I0917 18:14:17.691190   61797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:14:17.691243   61797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:14:17.709021   61797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I0917 18:14:17.709500   61797 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:14:17.710039   61797 main.go:141] libmachine: Using API Version  1
	I0917 18:14:17.710065   61797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:14:17.710417   61797 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:14:17.710620   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:17.710783   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetState
	I0917 18:14:17.712431   61797 fix.go:112] recreateIfNeeded on kubernetes-upgrade-644038: state=Running err=<nil>
	W0917 18:14:17.712458   61797 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:14:17.714739   61797 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-644038" VM ...
	I0917 18:14:17.716233   61797 machine.go:93] provisionDockerMachine start ...
	I0917 18:14:17.716263   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:17.716491   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:17.719264   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.719770   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:17.719801   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.719973   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:17.720121   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.720330   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.720491   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:17.720662   61797 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:17.720906   61797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:14:17.720925   61797 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:14:17.830845   61797 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-644038
	
	I0917 18:14:17.830881   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:14:17.831198   61797 buildroot.go:166] provisioning hostname "kubernetes-upgrade-644038"
	I0917 18:14:17.831231   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:14:17.831420   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:17.834435   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.834803   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:17.834832   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.835009   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:17.835192   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.835328   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.835451   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:17.835594   61797 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:17.835768   61797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:14:17.835780   61797 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-644038 && echo "kubernetes-upgrade-644038" | sudo tee /etc/hostname
	I0917 18:14:17.967766   61797 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-644038
	
	I0917 18:14:17.967798   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:17.970784   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.971217   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:17.971262   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:17.971396   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:17.971594   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.971747   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:17.971916   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:17.972065   61797 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:17.972311   61797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:14:17.972336   61797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-644038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-644038/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-644038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:14:18.087150   61797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:14:18.087189   61797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:14:18.087216   61797 buildroot.go:174] setting up certificates
	I0917 18:14:18.087229   61797 provision.go:84] configureAuth start
	I0917 18:14:18.087243   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetMachineName
	I0917 18:14:18.087550   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:14:18.090452   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.090814   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:18.090836   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.091017   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:18.093579   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.094007   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:18.094035   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.094302   61797 provision.go:143] copyHostCerts
	I0917 18:14:18.094367   61797 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:14:18.094377   61797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:14:18.094446   61797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:14:18.094564   61797 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:14:18.094577   61797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:14:18.094605   61797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:14:18.094684   61797 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:14:18.094694   61797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:14:18.094724   61797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:14:18.094790   61797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-644038 san=[127.0.0.1 192.168.50.134 kubernetes-upgrade-644038 localhost minikube]
	I0917 18:14:18.249681   61797 provision.go:177] copyRemoteCerts
	I0917 18:14:18.249763   61797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:14:18.249792   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:18.253280   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.253766   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:18.253825   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.254023   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:18.254239   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:18.254429   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:18.254604   61797 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:14:18.346234   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:14:18.373831   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0917 18:14:18.404005   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:14:18.437016   61797 provision.go:87] duration metric: took 349.774348ms to configureAuth
	I0917 18:14:18.437050   61797 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:14:18.437293   61797 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:14:18.437380   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:18.440347   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.440806   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:18.440855   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:18.441053   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:18.441267   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:18.441439   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:18.441582   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:18.441744   61797 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:18.441928   61797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:14:18.441943   61797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:14:16.210488   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.210964   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has current primary IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.210988   61418 main.go:141] libmachine: (kindnet-639892) Found IP for machine: 192.168.72.66
	I0917 18:14:16.211002   61418 main.go:141] libmachine: (kindnet-639892) Reserving static IP address...
	I0917 18:14:16.211415   61418 main.go:141] libmachine: (kindnet-639892) DBG | unable to find host DHCP lease matching {name: "kindnet-639892", mac: "52:54:00:58:7c:d6", ip: "192.168.72.66"} in network mk-kindnet-639892
	I0917 18:14:16.293858   61418 main.go:141] libmachine: (kindnet-639892) Reserved static IP address: 192.168.72.66
	I0917 18:14:16.293912   61418 main.go:141] libmachine: (kindnet-639892) DBG | Getting to WaitForSSH function...
	I0917 18:14:16.293921   61418 main.go:141] libmachine: (kindnet-639892) Waiting for SSH to be available...
	I0917 18:14:16.296888   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.297310   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.297440   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.297699   61418 main.go:141] libmachine: (kindnet-639892) DBG | Using SSH client type: external
	I0917 18:14:16.297727   61418 main.go:141] libmachine: (kindnet-639892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa (-rw-------)
	I0917 18:14:16.297760   61418 main.go:141] libmachine: (kindnet-639892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:14:16.297774   61418 main.go:141] libmachine: (kindnet-639892) DBG | About to run SSH command:
	I0917 18:14:16.297799   61418 main.go:141] libmachine: (kindnet-639892) DBG | exit 0
	I0917 18:14:16.421623   61418 main.go:141] libmachine: (kindnet-639892) DBG | SSH cmd err, output: <nil>: 
	I0917 18:14:16.421910   61418 main.go:141] libmachine: (kindnet-639892) KVM machine creation complete!
	I0917 18:14:16.422272   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetConfigRaw
	I0917 18:14:16.422876   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:16.423087   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:16.423239   61418 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:14:16.423253   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetState
	I0917 18:14:16.424825   61418 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:14:16.424839   61418 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:14:16.424858   61418 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:14:16.424864   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:16.427147   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.427525   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.427551   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.427696   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:16.427867   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.428045   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.428198   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:16.428340   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:16.428532   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:16.428544   61418 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:14:16.533033   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:14:16.533057   61418 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:14:16.533083   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:16.536310   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.536797   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.536838   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.537027   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:16.537214   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.537419   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.537589   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:16.537778   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:16.537993   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:16.538009   61418 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:14:16.642256   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:14:16.642369   61418 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:14:16.642386   61418 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:14:16.642396   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetMachineName
	I0917 18:14:16.642665   61418 buildroot.go:166] provisioning hostname "kindnet-639892"
	I0917 18:14:16.642710   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetMachineName
	I0917 18:14:16.642923   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:16.645963   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.646364   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.646396   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.646591   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:16.646780   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.646965   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.647145   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:16.647309   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:16.647530   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:16.647547   61418 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-639892 && echo "kindnet-639892" | sudo tee /etc/hostname
	I0917 18:14:16.773606   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-639892
	
	I0917 18:14:16.773636   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:16.776631   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.777012   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.777042   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.777268   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:16.777459   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.777643   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:16.777805   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:16.777996   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:16.778215   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:16.778239   61418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-639892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-639892/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-639892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:14:16.899765   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
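
The shell block above makes the new hostname resolvable locally: keep /etc/hosts unchanged if the name is already present, rewrite an existing 127.0.1.1 entry, or append one. A small Go sketch of the same decision logic applied to an in-memory hosts file follows; ensureHostname is a hypothetical helper, not minikube's code.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the shell logic from the log: if the hostname is
// already present, leave the file alone; if a 127.0.1.1 entry exists, rewrite
// it; otherwise append a new 127.0.1.1 line.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)

	switch {
	case hasName.MatchString(hosts):
		return hosts
	case loopback.MatchString(hosts):
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	default:
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "kindnet-639892"))
}
```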
	I0917 18:14:16.899827   61418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:14:16.899857   61418 buildroot.go:174] setting up certificates
	I0917 18:14:16.899871   61418 provision.go:84] configureAuth start
	I0917 18:14:16.899883   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetMachineName
	I0917 18:14:16.900245   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetIP
	I0917 18:14:16.902991   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.903359   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.903387   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.903561   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:16.905820   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.906165   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:16.906190   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:16.906386   61418 provision.go:143] copyHostCerts
	I0917 18:14:16.906450   61418 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:14:16.906459   61418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:14:16.906516   61418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:14:16.906604   61418 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:14:16.906612   61418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:14:16.906631   61418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:14:16.906689   61418 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:14:16.906696   61418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:14:16.906720   61418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:14:16.906799   61418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.kindnet-639892 san=[127.0.0.1 192.168.72.66 kindnet-639892 localhost minikube]
	I0917 18:14:17.034258   61418 provision.go:177] copyRemoteCerts
	I0917 18:14:17.034321   61418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:14:17.034346   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.037286   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.037768   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.037791   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.038009   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.038210   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.038373   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.038511   61418 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa Username:docker}
	I0917 18:14:17.120431   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:14:17.149626   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 18:14:17.179076   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:14:17.207057   61418 provision.go:87] duration metric: took 307.171251ms to configureAuth
	I0917 18:14:17.207088   61418 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:14:17.207300   61418 config.go:182] Loaded profile config "kindnet-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:14:17.207410   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.210128   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.210483   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.210514   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.210726   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.210903   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.211018   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.211203   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.211407   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:17.211634   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:17.211658   61418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:14:17.442787   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:14:17.442820   61418 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:14:17.442831   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetURL
	I0917 18:14:17.444193   61418 main.go:141] libmachine: (kindnet-639892) DBG | Using libvirt version 6000000
	I0917 18:14:17.446891   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.447302   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.447335   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.447625   61418 main.go:141] libmachine: Docker is up and running!
	I0917 18:14:17.447644   61418 main.go:141] libmachine: Reticulating splines...
	I0917 18:14:17.447651   61418 client.go:171] duration metric: took 26.103117142s to LocalClient.Create
	I0917 18:14:17.447669   61418 start.go:167] duration metric: took 26.103176203s to libmachine.API.Create "kindnet-639892"
	I0917 18:14:17.447679   61418 start.go:293] postStartSetup for "kindnet-639892" (driver="kvm2")
	I0917 18:14:17.447688   61418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:14:17.447705   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:17.447993   61418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:14:17.448021   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.450264   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.450581   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.450611   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.450790   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.450992   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.451185   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.451332   61418 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa Username:docker}
	I0917 18:14:17.533844   61418 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:14:17.538560   61418 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:14:17.538592   61418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:14:17.538666   61418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:14:17.538767   61418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:14:17.538891   61418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:14:17.550925   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:14:17.577180   61418 start.go:296] duration metric: took 129.489021ms for postStartSetup
	I0917 18:14:17.577260   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetConfigRaw
	I0917 18:14:17.577941   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetIP
	I0917 18:14:17.581146   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.581565   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.581607   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.581905   61418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/config.json ...
	I0917 18:14:17.582171   61418 start.go:128] duration metric: took 26.259564083s to createHost
	I0917 18:14:17.582203   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.584581   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.584926   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.584947   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.585121   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.585331   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.585495   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.585628   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.585760   61418 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:17.585964   61418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0917 18:14:17.585975   61418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:14:17.690594   61418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596857.674549224
	
	I0917 18:14:17.690616   61418 fix.go:216] guest clock: 1726596857.674549224
	I0917 18:14:17.690624   61418 fix.go:229] Guest: 2024-09-17 18:14:17.674549224 +0000 UTC Remote: 2024-09-17 18:14:17.582186679 +0000 UTC m=+96.676870306 (delta=92.362545ms)
	I0917 18:14:17.690642   61418 fix.go:200] guest clock delta is within tolerance: 92.362545ms
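
The fix.go lines above read the guest clock with `date +%s.%N`, compare it with the host-side timestamp, and accept the roughly 92ms skew as within tolerance. A sketch of that check follows; the 2-second tolerance is an assumed value for illustration, the log only shows that this particular delta passed.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns the "date +%s.%N" output (e.g. 1726596857.674549224)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726596857.674549224")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := remote.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for this sketch; the log only records that a ~92ms
	// delta was accepted.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
```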
	I0917 18:14:17.690647   61418 start.go:83] releasing machines lock for "kindnet-639892", held for 26.368229892s
	I0917 18:14:17.690681   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:17.690942   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetIP
	I0917 18:14:17.693697   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.694141   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.694172   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.694332   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:17.694857   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:17.695029   61418 main.go:141] libmachine: (kindnet-639892) Calling .DriverName
	I0917 18:14:17.695103   61418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:14:17.695169   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.695264   61418 ssh_runner.go:195] Run: cat /version.json
	I0917 18:14:17.695290   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHHostname
	I0917 18:14:17.699252   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.699290   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.699707   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.699733   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.699765   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:17.699780   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:17.699929   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.700093   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.700098   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHPort
	I0917 18:14:17.700275   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.700288   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHKeyPath
	I0917 18:14:17.700409   61418 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa Username:docker}
	I0917 18:14:17.700494   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetSSHUsername
	I0917 18:14:17.700614   61418 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kindnet-639892/id_rsa Username:docker}
	I0917 18:14:17.778918   61418 ssh_runner.go:195] Run: systemctl --version
	I0917 18:14:17.799721   61418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:14:17.972412   61418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:14:17.979750   61418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:14:17.979820   61418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:14:17.999520   61418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:14:17.999545   61418 start.go:495] detecting cgroup driver to use...
	I0917 18:14:17.999607   61418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:14:18.019608   61418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:14:18.035275   61418 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:14:18.035354   61418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:14:18.050107   61418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:14:18.065153   61418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:14:18.205397   61418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:14:18.398844   61418 docker.go:233] disabling docker service ...
	I0917 18:14:18.398920   61418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:14:18.419743   61418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:14:18.436334   61418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:14:18.584497   61418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:14:18.720429   61418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:14:18.735262   61418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:14:18.755917   61418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:14:18.755986   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.767730   61418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:14:18.767793   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.779535   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.790713   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.802440   61418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:14:18.813911   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.824938   61418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.843142   61418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:18.854263   61418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:14:18.865245   61418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:14:18.865311   61418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:14:18.879278   61418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:14:18.889971   61418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:14:19.010579   61418 ssh_runner.go:195] Run: sudo systemctl restart crio
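
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, then reload systemd and restart CRI-O. The sketch below lists equivalent remote commands; it is an illustrative subset, not minikube's exact sequence.

```go
package main

import "fmt"

// crioConfigCmds returns remote commands that adjust CRI-O's drop-in config
// the same way the log above does: pause image, cgroup driver, conmon cgroup,
// then a restart.
func crioConfigCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}
```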
	I0917 18:14:19.107691   61418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:14:19.107772   61418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:14:19.112790   61418 start.go:563] Will wait 60s for crictl version
	I0917 18:14:19.112860   61418 ssh_runner.go:195] Run: which crictl
	I0917 18:14:19.116948   61418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:14:19.160269   61418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:14:19.160361   61418 ssh_runner.go:195] Run: crio --version
	I0917 18:14:19.191423   61418 ssh_runner.go:195] Run: crio --version
	I0917 18:14:19.224645   61418 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:14:14.694160   61298 addons.go:510] duration metric: took 1.567240011s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0917 18:14:15.995378   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:17.996964   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:19.226059   61418 main.go:141] libmachine: (kindnet-639892) Calling .GetIP
	I0917 18:14:19.228460   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:19.228985   61418 main.go:141] libmachine: (kindnet-639892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:7c:d6", ip: ""} in network mk-kindnet-639892: {Iface:virbr4 ExpiryTime:2024-09-17 19:14:07 +0000 UTC Type:0 Mac:52:54:00:58:7c:d6 Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:kindnet-639892 Clientid:01:52:54:00:58:7c:d6}
	I0917 18:14:19.229023   61418 main.go:141] libmachine: (kindnet-639892) DBG | domain kindnet-639892 has defined IP address 192.168.72.66 and MAC address 52:54:00:58:7c:d6 in network mk-kindnet-639892
	I0917 18:14:19.229264   61418 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:14:19.233480   61418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:14:19.246719   61418 kubeadm.go:883] updating cluster {Name:kindnet-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:14:19.246837   61418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:14:19.246929   61418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:14:19.282371   61418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:14:19.282449   61418 ssh_runner.go:195] Run: which lz4
	I0917 18:14:19.286774   61418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:14:19.290996   61418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:14:19.291032   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:14:20.809741   61418 crio.go:462] duration metric: took 1.523015465s to copy over tarball
	I0917 18:14:20.809819   61418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:14:20.498299   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:22.995744   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:23.105329   61418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295485925s)
	I0917 18:14:23.105353   61418 crio.go:469] duration metric: took 2.295580501s to extract the tarball
	I0917 18:14:23.105362   61418 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:14:23.144423   61418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:14:23.188993   61418 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:14:23.189018   61418 cache_images.go:84] Images are preloaded, skipping loading
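
Here the runner stats /preloaded.tar.lz4 on the guest, copies the cached preload tarball over when it is missing, and unpacks it into /var with lz4-compressed tar before removing it. A sketch of that flow follows; remoteFileExists, scpToGuest and runRemote are placeholder helpers standing in for minikube's ssh_runner.

```go
package main

import "fmt"

// Placeholder helpers for this sketch; in minikube the equivalent calls go
// through ssh_runner against the guest VM.
func remoteFileExists(path string) bool     { fmt.Println("stat", path); return false }
func scpToGuest(local, remote string) error { fmt.Println("scp", local, "-->", remote); return nil }
func runRemote(cmd string) error            { fmt.Println("run:", cmd); return nil }

// ensurePreload mirrors the flow in the log: skip the copy if the tarball is
// already on the guest, otherwise transfer it, unpack it into /var, and
// remove the tarball afterwards.
func ensurePreload(localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	if !remoteFileExists(remote) {
		if err := scpToGuest(localTarball, remote); err != nil {
			return err
		}
	}
	if err := runRemote("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
		return err
	}
	return runRemote("rm " + remote)
}

func main() {
	_ = ensurePreload("preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4")
}
```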
	I0917 18:14:23.189027   61418 kubeadm.go:934] updating node { 192.168.72.66 8443 v1.31.1 crio true true} ...
	I0917 18:14:23.189127   61418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-639892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
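
The kubelet systemd drop-in above overrides ExecStart with the versioned binary and per-node flags (--hostname-override, --node-ip, kubeconfig paths). Below is a minimal sketch of assembling that line; the helper and the trimmed flag set are illustrative, not minikube's template.

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart builds the ExecStart line written into the systemd
// drop-in shown above. Paths follow the log; the flag set is limited to the
// ones visible there.
func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
	bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", k8sVersion)
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return "ExecStart=" + bin + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.1", "kindnet-639892", "192.168.72.66"))
}
```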
	I0917 18:14:23.189193   61418 ssh_runner.go:195] Run: crio config
	I0917 18:14:23.250803   61418 cni.go:84] Creating CNI manager for "kindnet"
	I0917 18:14:23.250830   61418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:14:23.250861   61418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.66 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-639892 NodeName:kindnet-639892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:14:23.251028   61418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-639892"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
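
The generated kubeadm config above is rendered from the node parameters and later written to /var/tmp/minikube/kubeadm.yaml.new on the guest. The sketch below renders only the InitConfiguration stanza with Go's text/template, using values visible in the log; the template and struct are illustrative, not minikube's own.

```go
package main

import (
	"os"
	"text/template"
)

// initCfg holds the handful of values that vary per node in the
// InitConfiguration section shown above.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the log above; only this stanza is rendered here.
	cfg := initCfg{
		AdvertiseAddress: "192.168.72.66",
		BindPort:         8443,
		NodeName:         "kindnet-639892",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```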
	I0917 18:14:23.251101   61418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:14:23.262265   61418 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:14:23.262333   61418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:14:23.272918   61418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 18:14:23.291867   61418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:14:23.310940   61418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0917 18:14:23.329658   61418 ssh_runner.go:195] Run: grep 192.168.72.66	control-plane.minikube.internal$ /etc/hosts
	I0917 18:14:23.333956   61418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:14:23.347149   61418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:14:23.458218   61418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:14:23.475760   61418 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892 for IP: 192.168.72.66
	I0917 18:14:23.475784   61418 certs.go:194] generating shared ca certs ...
	I0917 18:14:23.475800   61418 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.475976   61418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:14:23.476033   61418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:14:23.476050   61418 certs.go:256] generating profile certs ...
	I0917 18:14:23.476119   61418 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.key
	I0917 18:14:23.476147   61418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt with IP's: []
	I0917 18:14:23.559226   61418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt ...
	I0917 18:14:23.559253   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: {Name:mk926c201a831a1a6835f90f0c280c82e85c0fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.559473   61418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.key ...
	I0917 18:14:23.559488   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.key: {Name:mka5a1e13ca45e92e616af04bfa5c7b7f8ce90bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.559593   61418 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key.a4b9c8eb
	I0917 18:14:23.559614   61418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt.a4b9c8eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.66]
	I0917 18:14:23.685930   61418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt.a4b9c8eb ...
	I0917 18:14:23.685966   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt.a4b9c8eb: {Name:mkb612c5eebf08c0ba49ab733109a445c4d57038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.686186   61418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key.a4b9c8eb ...
	I0917 18:14:23.686206   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key.a4b9c8eb: {Name:mkd509523aaa5c5c6f9aca8f065696d69a61e3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.686322   61418 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt.a4b9c8eb -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt
	I0917 18:14:23.686450   61418 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key.a4b9c8eb -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key
	I0917 18:14:23.686529   61418 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.key
	I0917 18:14:23.686549   61418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.crt with IP's: []
	I0917 18:14:23.767215   61418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.crt ...
	I0917 18:14:23.767244   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.crt: {Name:mkbc62735349901e4a89fe2a3b9ae9d9dcc9e72a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.767437   61418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.key ...
	I0917 18:14:23.767453   61418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.key: {Name:mk764e0ea091c021109f8674cc497cfc20e67b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:23.767675   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:14:23.767719   61418 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:14:23.767733   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:14:23.767774   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:14:23.767804   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:14:23.767835   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:14:23.767887   61418 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:14:23.768513   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:14:23.802990   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:14:23.834696   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:14:23.862114   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:14:23.887719   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 18:14:23.914010   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:14:23.946508   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:14:23.977817   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:14:24.007352   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:14:24.035434   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:14:24.062403   61418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:14:24.089390   61418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:14:24.112479   61418 ssh_runner.go:195] Run: openssl version
	I0917 18:14:24.121405   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:14:24.134510   61418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:14:24.140119   61418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:14:24.140191   61418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:14:24.147683   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:14:24.168112   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:14:24.187606   61418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:24.193376   61418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:24.193465   61418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:24.202507   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:14:24.221558   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:14:24.239053   61418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:14:24.246568   61418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:14:24.246625   61418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:14:24.253613   61418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
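The three blocks above install each CA bundle twice: once by name under /usr/share/ca-certificates, and once under /etc/ssl/certs as a symlink named after the certificate's OpenSSL subject hash (e.g. 51391683.0), which is how TLS libraries look up trust anchors. As a rough illustration only (this is not minikube's certs.go; it assumes openssl is on PATH and the process may write to /etc/ssl/certs), the hash-link step amounts to:

	// hashlink.go - minimal sketch of the "openssl x509 -hash" + "ln -fs <pem> <hash>.0"
	// steps shown in the log above. Illustrative, not minikube code.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installHashLink(pemPath, certsDir string) error {
		// Equivalent of: openssl x509 -hash -noout -in <pemPath>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl hash failed: %w", err)
		}
		hash := strings.TrimSpace(string(out))

		// Equivalent of: test -L <certsDir>/<hash>.0 || ln -fs <pemPath> <certsDir>/<hash>.0
		link := filepath.Join(certsDir, hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link already present, nothing to do
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installHashLink("/etc/ssl/certs/18259.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hash symlink installed")
	}

The hash-named symlink is what makes the cert visible to openssl's default verify path; the by-name copies under /usr/share/ca-certificates exist mainly so the files can be re-linked on later starts.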
	I0917 18:14:24.267266   61418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:14:24.273578   61418 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:14:24.273669   61418 kubeadm.go:392] StartCluster: {Name:kindnet-639892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:kindnet-639892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:14:24.273743   61418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:14:24.273800   61418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:14:24.316316   61418 cri.go:89] found id: ""
	I0917 18:14:24.316427   61418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:14:24.327698   61418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:14:24.340619   61418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:14:24.353313   61418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:14:24.353332   61418 kubeadm.go:157] found existing configuration files:
	
	I0917 18:14:24.353372   61418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:14:24.364682   61418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:14:24.364770   61418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:14:24.375593   61418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:14:24.385791   61418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:14:24.385875   61418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:14:24.399748   61418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:14:24.411813   61418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:14:24.411870   61418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:14:24.422950   61418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:14:24.433093   61418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:14:24.433160   61418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
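The sequence above is the stale-config cleanup: each existing /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the endpoint is missing, so the kubeadm init that follows starts from clean kubeconfigs (here all four files were simply absent). A minimal local sketch of that loop, assuming root on the node and not claiming to be the actual kubeadm.go code:

	// stalecfg.go - sketch of the "grep endpoint || rm -f" cleanup shown in the log.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, endpoint) {
				// Mirrors: sudo grep <endpoint> <f> ... followed by sudo rm -f <f>
				// (a file that is already missing just stays missing, as in the log)
				_ = os.Remove(f)
				fmt.Printf("removed or absent: %s\n", f)
				continue
			}
			fmt.Printf("kept: %s\n", f)
		}
	}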
	I0917 18:14:24.443771   61418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:14:24.507475   61418 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:14:24.507619   61418 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:14:24.630025   61418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:14:24.630210   61418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:14:24.630395   61418 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:14:24.648848   61418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:14:24.730907   61418 out.go:235]   - Generating certificates and keys ...
	I0917 18:14:24.731024   61418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:14:24.731112   61418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:14:24.846192   61418 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:14:25.069870   61418 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:14:25.178577   61418 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:14:25.315287   61418 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:14:25.623819   61418 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:14:25.624004   61418 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-639892 localhost] and IPs [192.168.72.66 127.0.0.1 ::1]
	I0917 18:14:25.779652   61418 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:14:25.779815   61418 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-639892 localhost] and IPs [192.168.72.66 127.0.0.1 ::1]
	I0917 18:14:25.986148   61418 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:14:26.409386   61418 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:14:26.462276   61418 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:14:26.462348   61418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:14:26.567896   61418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:14:26.791552   61418 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:14:26.929101   61418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:14:27.085347   61418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:14:27.166416   61418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:14:27.167159   61418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:14:27.170230   61418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:14:24.624648   61797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:14:24.624676   61797 machine.go:96] duration metric: took 6.908422442s to provisionDockerMachine
	I0917 18:14:24.624689   61797 start.go:293] postStartSetup for "kubernetes-upgrade-644038" (driver="kvm2")
	I0917 18:14:24.624701   61797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:14:24.624722   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:24.625141   61797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:14:24.625169   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:24.628411   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.628989   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:24.629025   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.629298   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:24.629549   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:24.629776   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:24.629943   61797 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:14:24.717828   61797 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:14:24.722862   61797 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:14:24.722898   61797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:14:24.722975   61797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:14:24.723063   61797 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:14:24.723176   61797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:14:24.737010   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:14:24.768929   61797 start.go:296] duration metric: took 144.22539ms for postStartSetup
	I0917 18:14:24.768982   61797 fix.go:56] duration metric: took 7.078190097s for fixHost
	I0917 18:14:24.769005   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:24.771909   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.772289   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:24.772321   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.772595   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:24.772813   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:24.772974   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:24.773138   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:24.773364   61797 main.go:141] libmachine: Using SSH client type: native
	I0917 18:14:24.773592   61797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.134 22 <nil> <nil>}
	I0917 18:14:24.773613   61797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:14:24.886585   61797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596864.839062237
	
	I0917 18:14:24.886614   61797 fix.go:216] guest clock: 1726596864.839062237
	I0917 18:14:24.886624   61797 fix.go:229] Guest: 2024-09-17 18:14:24.839062237 +0000 UTC Remote: 2024-09-17 18:14:24.768986433 +0000 UTC m=+70.638185593 (delta=70.075804ms)
	I0917 18:14:24.886652   61797 fix.go:200] guest clock delta is within tolerance: 70.075804ms
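fix.go compares the guest's "date +%s.%N" output against the host clock and only adjusts the guest time when the drift exceeds a tolerance; here the ~70ms delta passes. A standalone sketch of that comparison follows (the 2-second tolerance is a placeholder for illustration, not minikube's actual threshold):

	// clockdelta.go - sketch of the guest-clock drift check shown in the log.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch converts a "seconds.nanoseconds" string such as
	// "1726596864.839062237" into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate the fractional part to exactly 9 digits of nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1726596864.839062237") // value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := 2 * time.Second // placeholder threshold
		fmt.Printf("guest clock delta: %v\n", delta)
		if math.Abs(delta.Seconds()) > tolerance.Seconds() {
			fmt.Println("delta outside tolerance: clock would need adjusting")
		} else {
			fmt.Println("delta within tolerance")
		}
	}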
	I0917 18:14:24.886659   61797 start.go:83] releasing machines lock for "kubernetes-upgrade-644038", held for 7.195893444s
	I0917 18:14:24.886681   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:24.886929   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:14:24.889913   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.890328   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:24.890368   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.890528   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:24.891039   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:24.891223   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .DriverName
	I0917 18:14:24.891336   61797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:14:24.891371   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:24.891447   61797 ssh_runner.go:195] Run: cat /version.json
	I0917 18:14:24.891480   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHHostname
	I0917 18:14:24.894406   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.894437   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.894796   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:24.894828   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.894858   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:24.894874   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:24.895008   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:24.895131   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHPort
	I0917 18:14:24.895199   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:24.895277   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHKeyPath
	I0917 18:14:24.895331   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:24.895430   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetSSHUsername
	I0917 18:14:24.895431   61797 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:14:24.895558   61797 sshutil.go:53] new ssh client: &{IP:192.168.50.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/kubernetes-upgrade-644038/id_rsa Username:docker}
	I0917 18:14:25.011866   61797 ssh_runner.go:195] Run: systemctl --version
	I0917 18:14:25.018737   61797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:14:25.193414   61797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:14:25.202543   61797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:14:25.202617   61797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:14:25.213718   61797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
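Before choosing a CNI, any pre-existing bridge or podman configs in /etc/cni/net.d would be sidelined by renaming them with a .mk_disabled suffix (the find ... -exec mv command a few lines up); on this node none were found. A minimal sketch of that rename pass, assuming root and not claiming to be minikube's cni.go:

	// cnidisable.go - sketch of the bridge/podman CNI-config disabling step.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("nothing to disable:", err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Same selection as the find command: files matching *bridge* or *podman*.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
					continue
				}
				fmt.Println("disabled:", src)
			}
		}
	}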
	I0917 18:14:25.213746   61797 start.go:495] detecting cgroup driver to use...
	I0917 18:14:25.213830   61797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:14:25.232323   61797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:14:25.247412   61797 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:14:25.247507   61797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:14:25.263843   61797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:14:25.279959   61797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:14:25.463867   61797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:14:25.628628   61797 docker.go:233] disabling docker service ...
	I0917 18:14:25.628747   61797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:14:25.647567   61797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:14:25.663099   61797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:14:25.810009   61797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:14:25.964009   61797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:14:25.980924   61797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:14:26.010265   61797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:14:26.010339   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.022662   61797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:14:26.022733   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.034708   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.048643   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.062722   61797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:14:26.075995   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.087952   61797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:14:26.100981   61797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
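The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, force conmon_cgroup to "pod", and make sure default_sysctls allows unprivileged low ports. The resulting file is not captured in this log; under the assumption that the drop-in follows CRI-O's usual TOML layout, the edited fragment would look roughly like:

	# Illustrative only - the actual 02-crio.conf contents are not in this log.
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]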
	I0917 18:14:26.115079   61797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:14:26.125585   61797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:14:26.137351   61797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:14:26.320051   61797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:14:27.136774   61797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:14:27.136855   61797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:14:27.147537   61797 start.go:563] Will wait 60s for crictl version
	I0917 18:14:27.147612   61797 ssh_runner.go:195] Run: which crictl
	I0917 18:14:27.153336   61797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:14:27.394974   61797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:14:27.395119   61797 ssh_runner.go:195] Run: crio --version
	I0917 18:14:27.714320   61797 ssh_runner.go:195] Run: crio --version
	I0917 18:14:27.946343   61797 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:14:27.947943   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) Calling .GetIP
	I0917 18:14:27.951684   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:27.952147   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ec:bf", ip: ""} in network mk-kubernetes-upgrade-644038: {Iface:virbr2 ExpiryTime:2024-09-17 19:12:49 +0000 UTC Type:0 Mac:52:54:00:73:ec:bf Iaid: IPaddr:192.168.50.134 Prefix:24 Hostname:kubernetes-upgrade-644038 Clientid:01:52:54:00:73:ec:bf}
	I0917 18:14:27.952183   61797 main.go:141] libmachine: (kubernetes-upgrade-644038) DBG | domain kubernetes-upgrade-644038 has defined IP address 192.168.50.134 and MAC address 52:54:00:73:ec:bf in network mk-kubernetes-upgrade-644038
	I0917 18:14:27.952648   61797 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:14:28.023770   61797 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:14:28.023874   61797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:14:28.023913   61797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:14:28.266373   61797 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:14:28.266403   61797 crio.go:433] Images already preloaded, skipping extraction
	I0917 18:14:28.266474   61797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:14:28.526835   61797 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:14:28.526871   61797 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:14:28.526880   61797 kubeadm.go:934] updating node { 192.168.50.134 8443 v1.31.1 crio true true} ...
	I0917 18:14:28.527027   61797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-644038 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:14:28.527120   61797 ssh_runner.go:195] Run: crio config
	I0917 18:14:28.740748   61797 cni.go:84] Creating CNI manager for ""
	I0917 18:14:28.740839   61797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:14:28.740863   61797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:14:28.740912   61797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.134 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-644038 NodeName:kubernetes-upgrade-644038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:14:28.741117   61797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-644038"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:14:28.741204   61797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:14:28.767850   61797 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:14:28.767935   61797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:14:28.791246   61797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0917 18:14:28.832171   61797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:14:28.861888   61797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:14:28.890623   61797 ssh_runner.go:195] Run: grep 192.168.50.134	control-plane.minikube.internal$ /etc/hosts
	I0917 18:14:28.895645   61797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:14:25.613723   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:27.996067   61298 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kx5z" in "kube-system" namespace has status "Ready":"False"
	I0917 18:14:27.171346   61418 out.go:235]   - Booting up control plane ...
	I0917 18:14:27.171466   61418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:14:27.171559   61418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:14:27.172826   61418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:14:27.192098   61418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:14:27.201285   61418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:14:27.201389   61418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:14:27.382485   61418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:14:27.382667   61418 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:14:28.384751   61418 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001625614s
	I0917 18:14:28.384980   61418 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:14:33.383474   61418 kubeadm.go:310] [api-check] The API server is healthy after 5.001284845s
	I0917 18:14:33.407872   61418 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:14:33.424361   61418 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:14:33.476100   61418 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:14:33.476499   61418 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-639892 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:14:33.496052   61418 kubeadm.go:310] [bootstrap-token] Using token: 1bf0jz.jqda0m2g5xisc0gb
	I0917 18:14:29.202092   61797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:14:29.229924   61797 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038 for IP: 192.168.50.134
	I0917 18:14:29.229950   61797 certs.go:194] generating shared ca certs ...
	I0917 18:14:29.229969   61797 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:14:29.230152   61797 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:14:29.230206   61797 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:14:29.230217   61797 certs.go:256] generating profile certs ...
	I0917 18:14:29.230314   61797 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/client.key
	I0917 18:14:29.230383   61797 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key.82a12f2a
	I0917 18:14:29.230467   61797 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key
	I0917 18:14:29.230627   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:14:29.230674   61797 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:14:29.230687   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:14:29.230718   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:14:29.230754   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:14:29.230780   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:14:29.230839   61797 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:14:29.231760   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:14:29.268773   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:14:29.302301   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:14:29.330668   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:14:29.367291   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 18:14:29.406027   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:14:29.445104   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:14:29.527612   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kubernetes-upgrade-644038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:14:29.581883   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:14:29.628379   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:14:29.660561   61797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:14:29.688781   61797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:14:29.713550   61797 ssh_runner.go:195] Run: openssl version
	I0917 18:14:29.723823   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:14:29.740061   61797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:14:29.745785   61797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:14:29.745899   61797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:14:29.754199   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:14:29.767365   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:14:29.783293   61797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:14:29.789414   61797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:14:29.789487   61797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:14:29.799304   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:14:29.816886   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:14:29.831608   61797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:29.837364   61797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:29.837438   61797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:14:29.844650   61797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:14:29.856979   61797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:14:29.862010   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:14:29.870723   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:14:29.880173   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:14:29.886577   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:14:29.893874   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:14:29.905017   61797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
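The six openssl ... -checkend 86400 runs above verify that each existing control-plane certificate stays valid for at least another 24 hours before the profile's certs are reused rather than regenerated. An equivalent check using Go's crypto/x509, offered only as an illustration of the -checkend semantics:

	// checkend.go - sketch of "openssl x509 -noout -checkend 86400" for the certs
	// listed in the log. Illustrative, not minikube's certs.go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires
	// before now+window, i.e. the condition -checkend flags.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Same files the log checks before deciding whether to regenerate certs.
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 86400*time.Second)
			switch {
			case err != nil:
				fmt.Printf("%s: %v\n", c, err)
			case soon:
				fmt.Printf("%s: expires within 24h, would be regenerated\n", c)
			default:
				fmt.Printf("%s: valid for at least another 24h\n", c)
			}
		}
	}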
	I0917 18:14:29.911829   61797 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-644038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-644038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.134 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:14:29.911933   61797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:14:29.912010   61797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:14:29.977427   61797 cri.go:89] found id: "2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8"
	I0917 18:14:29.977454   61797 cri.go:89] found id: "190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c"
	I0917 18:14:29.977462   61797 cri.go:89] found id: "0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa"
	I0917 18:14:29.977467   61797 cri.go:89] found id: "cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498"
	I0917 18:14:29.977472   61797 cri.go:89] found id: "625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b"
	I0917 18:14:29.977476   61797 cri.go:89] found id: "e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab"
	I0917 18:14:29.977480   61797 cri.go:89] found id: "ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501"
	I0917 18:14:29.977484   61797 cri.go:89] found id: "58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643"
	I0917 18:14:29.977489   61797 cri.go:89] found id: "28b7c75c2b9b3adbe13844953510ba0a055bb6d971536da794ee664f6ba1e5d7"
	I0917 18:14:29.977496   61797 cri.go:89] found id: "ce484757d62258dba8a09b02b6b14db116473fbc9ce3fb06ee02611ab7813b20"
	I0917 18:14:29.977499   61797 cri.go:89] found id: "784b47dc08f179e3a118d0b3834da998efb01d457d318a41a4d173c6ae658ae9"
	I0917 18:14:29.977504   61797 cri.go:89] found id: "d9200dda1fcaca1c1ec4624106b81677ef344e0f5ea37eb9d8ff5f2a00d2aaf1"
	I0917 18:14:29.977508   61797 cri.go:89] found id: "c8e74ebd7e21ba11315e1efc5c01ed4c96dc0749585b3d790886fe5273dd7163"
	I0917 18:14:29.977512   61797 cri.go:89] found id: "68bf18347ee1757e53ab2fb9e0692097348de5dff04a59829ddf914e80fa049e"
	I0917 18:14:29.977518   61797 cri.go:89] found id: "93ffcf1970c20dc0fb98df756be1b79e37e90cb19fdb094fc83471a40f588565"
	I0917 18:14:29.977523   61797 cri.go:89] found id: ""
	I0917 18:14:29.977576   61797 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.933483803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596904933454525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=375e9eea-4c1a-4f2d-a225-c3f11b25f871 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.934307580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e085d4e-bb2c-4560-a639-b49167a69a8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.934383746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e085d4e-bb2c-4560-a639-b49167a69a8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.934877452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4eea2c832a40da3776deb2321cf2969b9d569cc333e53ae4ef6a405f1ca5595,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596901385289669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65875158488ee870555b20ff1ff0f42bf1872a5a0190ed601150cd817d018efc,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596901379306294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd59ede5ba6792f315d549193c31d2341274f50fd4905af7b805e07f979ff31,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596897612855301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3add9f200fac8d2dc2ab9cffae3833847c2da424d60127a41af264a7563d570,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596897660146925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc4d2995ad7fb75e3c92e2e967ff17caad9028540a08e99a2a6009498f082f8,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a325c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596897597806306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da49855e1000372f2826e3f6166e563f8d8b4fffc18b7ea50142f7a52607096,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596897580144132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a29b2d78ef73736ffb88958c35d9df5aaf0283ca5a11810e111605a36096df8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596889924714985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,
},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5a0f89a24d0708005686f37ba0438876f0524b41d3725650fd426aea4a479d,PodSandboxId:5e1b82185d4eabf74ee98015b36130f839fd0789cf3d1700490fc188e52dd497,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726596882560179407,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596869125549296,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596868972081671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6
d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596868000850175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e
0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596867791419468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a3
25c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596867860817449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Me
tadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596867721935427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596867710511854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643,PodSandboxId:0dc0f00f54bcb123f870c53ef0f2f4c3427d0bf2f87c81526215071be91f263b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726596828787156093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e085d4e-bb2c-4560-a639-b49167a69a8e name=/runtime.v1.RuntimeService/ListContainers
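The Version, ImageFsInfo, and ListContainers request/response pairs in this log are ordinary CRI gRPC calls against CRI-O; the empty ContainerFilter in each request is what produces the "No filters were applied, returning full container list" debug line. A minimal sketch of the same ListContainers call is below, assuming CRI-O's default socket path (/var/run/crio/crio.sock) and the k8s.io/cri-api runtime.v1 client; the socket path and client wiring are assumptions about this environment, not something the log itself states.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed CRI-O runtime endpoint; adjust if the node uses a
    	// different socket path.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial crio: %v", err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty request (no ContainerFilter) asks for every container,
    	// matching the "No filters were applied" debug entries above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatalf("ListContainers: %v", err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.GetName(), c.State)
    	}
    }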
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.995382997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8ee16d7-490c-4bb7-b631-7ec5f2bec7e0 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.995494751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8ee16d7-490c-4bb7-b631-7ec5f2bec7e0 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.997146847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c85439d-14f6-4c1f-93cf-3ca06c3ea0e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:04 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:04.998495259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596904998450331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c85439d-14f6-4c1f-93cf-3ca06c3ea0e5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.000302905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2f66886-17ed-4067-ada4-74e290d2f932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.000443219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2f66886-17ed-4067-ada4-74e290d2f932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.001212544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4eea2c832a40da3776deb2321cf2969b9d569cc333e53ae4ef6a405f1ca5595,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596901385289669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65875158488ee870555b20ff1ff0f42bf1872a5a0190ed601150cd817d018efc,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596901379306294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd59ede5ba6792f315d549193c31d2341274f50fd4905af7b805e07f979ff31,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596897612855301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3add9f200fac8d2dc2ab9cffae3833847c2da424d60127a41af264a7563d570,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596897660146925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc4d2995ad7fb75e3c92e2e967ff17caad9028540a08e99a2a6009498f082f8,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a325c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596897597806306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da49855e1000372f2826e3f6166e563f8d8b4fffc18b7ea50142f7a52607096,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596897580144132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a29b2d78ef73736ffb88958c35d9df5aaf0283ca5a11810e111605a36096df8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596889924714985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,
},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5a0f89a24d0708005686f37ba0438876f0524b41d3725650fd426aea4a479d,PodSandboxId:5e1b82185d4eabf74ee98015b36130f839fd0789cf3d1700490fc188e52dd497,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726596882560179407,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596869125549296,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596868972081671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6
d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596868000850175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e
0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596867791419468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a3
25c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596867860817449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Me
tadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596867721935427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596867710511854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643,PodSandboxId:0dc0f00f54bcb123f870c53ef0f2f4c3427d0bf2f87c81526215071be91f263b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726596828787156093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2f66886-17ed-4067-ada4-74e290d2f932 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.066962569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07b0fb4d-13b4-4d7e-830b-b813a517e9b5 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.067075089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07b0fb4d-13b4-4d7e-830b-b813a517e9b5 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.068873563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01e93f39-3979-4156-932d-eb51040a79de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.069545434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596905069509247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01e93f39-3979-4156-932d-eb51040a79de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.070207128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9093f690-0a0e-4bc4-b073-4b5ed0f540de name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.070361980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9093f690-0a0e-4bc4-b073-4b5ed0f540de name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.070847579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4eea2c832a40da3776deb2321cf2969b9d569cc333e53ae4ef6a405f1ca5595,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596901385289669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65875158488ee870555b20ff1ff0f42bf1872a5a0190ed601150cd817d018efc,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596901379306294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd59ede5ba6792f315d549193c31d2341274f50fd4905af7b805e07f979ff31,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596897612855301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3add9f200fac8d2dc2ab9cffae3833847c2da424d60127a41af264a7563d570,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596897660146925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc4d2995ad7fb75e3c92e2e967ff17caad9028540a08e99a2a6009498f082f8,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a325c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596897597806306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da49855e1000372f2826e3f6166e563f8d8b4fffc18b7ea50142f7a52607096,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596897580144132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a29b2d78ef73736ffb88958c35d9df5aaf0283ca5a11810e111605a36096df8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596889924714985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,
},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5a0f89a24d0708005686f37ba0438876f0524b41d3725650fd426aea4a479d,PodSandboxId:5e1b82185d4eabf74ee98015b36130f839fd0789cf3d1700490fc188e52dd497,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726596882560179407,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596869125549296,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596868972081671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6
d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596868000850175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e
0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596867791419468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a3
25c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596867860817449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Me
tadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596867721935427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596867710511854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643,PodSandboxId:0dc0f00f54bcb123f870c53ef0f2f4c3427d0bf2f87c81526215071be91f263b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726596828787156093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9093f690-0a0e-4bc4-b073-4b5ed0f540de name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.113613708Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d85d23af-35a2-45e7-a1d8-52eace7db66f name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.113718844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d85d23af-35a2-45e7-a1d8-52eace7db66f name=/runtime.v1.RuntimeService/Version
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.115971637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0185222d-2dfb-4b4c-90ff-f5cde4fdb4eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.116981917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596905116950770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0185222d-2dfb-4b4c-90ff-f5cde4fdb4eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.117838893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8ea21ca-af67-4ff4-b741-7f17d2ed3595 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.117950419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8ea21ca-af67-4ff4-b741-7f17d2ed3595 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:15:05 kubernetes-upgrade-644038 crio[2338]: time="2024-09-17 18:15:05.118476792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4eea2c832a40da3776deb2321cf2969b9d569cc333e53ae4ef6a405f1ca5595,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596901385289669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65875158488ee870555b20ff1ff0f42bf1872a5a0190ed601150cd817d018efc,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596901379306294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd59ede5ba6792f315d549193c31d2341274f50fd4905af7b805e07f979ff31,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596897612855301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3add9f200fac8d2dc2ab9cffae3833847c2da424d60127a41af264a7563d570,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596897660146925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc4d2995ad7fb75e3c92e2e967ff17caad9028540a08e99a2a6009498f082f8,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a325c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596897597806306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da49855e1000372f2826e3f6166e563f8d8b4fffc18b7ea50142f7a52607096,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596897580144132,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d
440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a29b2d78ef73736ffb88958c35d9df5aaf0283ca5a11810e111605a36096df8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596889924714985,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,
},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5a0f89a24d0708005686f37ba0438876f0524b41d3725650fd426aea4a479d,PodSandboxId:5e1b82185d4eabf74ee98015b36130f839fd0789cf3d1700490fc188e52dd497,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726596882560179407,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8,PodSandboxId:418a7fd5f1f97f55d6f029393e66117cfe81852f4b700db6db95a33a6e110942,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596869125549296,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q4pb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f632d4e1-a23d-4952-b5f3-e377830b0958,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c,PodSandboxId:020abd3ace2e7c5bca1b863256eca5dddf0a6a508c6c080ae33511c2eaf87dd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596868972081671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jv84g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54cc864f-cfbe-470d-b253-eb63170f7ca0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa,PodSandboxId:69277d3b2a87c0bff6ecb7905d3765b50e3790b44ad7d6
d744122f56c020ef6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596868000850175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d440fa90fde65689abc1dad629424b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b,PodSandboxId:96c0d35346971e73ef6425ecb578685a1c7ce39ad9b12fe9488e
0265a12d5375,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596867791419468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26125b27495b1edad3eebe3d44dcdc76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498,PodSandboxId:cdcb6fe0359c967962a63606ff7b1a3
25c7c4109020c7ef48cb3b1ff8542675e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596867860817449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f20fa38eaabb4b1c7730db30af637f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab,PodSandboxId:3b122c6b289c4900f5bad6ce1d485e1acaf46c4feb09c2f794772d72488d5076,Me
tadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596867721935427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-644038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70e9195d8956e82b5df2495628c9136,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501,PodSandboxId:b6b96f67f1fadafa5cf2d9541fa0481d5a5b6527e563c3122879c3aed998e8a6,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596867710511854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a23bae-a3e7-4749-9fbe-ab281cfd4782,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643,PodSandboxId:0dc0f00f54bcb123f870c53ef0f2f4c3427d0bf2f87c81526215071be91f263b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726596828787156093,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18cd416b-6de8-4b85-8393-40a39ef67f3f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8ea21ca-af67-4ff4-b741-7f17d2ed3595 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f4eea2c832a40       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago        Running             kube-proxy                2                   b6b96f67f1fad       kube-proxy-rmxw9
	65875158488ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   2                   020abd3ace2e7       coredns-7c65d6cfc9-jv84g
	a3add9f200fac       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago        Running             kube-controller-manager   2                   96c0d35346971       kube-controller-manager-kubernetes-upgrade-644038
	4bd59ede5ba67       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago        Running             kube-scheduler            2                   3b122c6b289c4       kube-scheduler-kubernetes-upgrade-644038
	cdc4d2995ad7f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago        Running             etcd                      2                   cdcb6fe0359c9       etcd-kubernetes-upgrade-644038
	8da49855e1000       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago        Running             kube-apiserver            2                   69277d3b2a87c       kube-apiserver-kubernetes-upgrade-644038
	8a29b2d78ef73       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 seconds ago       Running             coredns                   2                   418a7fd5f1f97       coredns-7c65d6cfc9-q4pb8
	2e5a0f89a24d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago       Running             storage-provisioner       2                   5e1b82185d4ea       storage-provisioner
	2b8469af1db84       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Exited              coredns                   1                   418a7fd5f1f97       coredns-7c65d6cfc9-q4pb8
	190bbcc247447       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Exited              coredns                   1                   020abd3ace2e7       coredns-7c65d6cfc9-jv84g
	0ab5909dd5d7f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   37 seconds ago       Exited              kube-apiserver            1                   69277d3b2a87c       kube-apiserver-kubernetes-upgrade-644038
	cf6ed605272c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago       Exited              etcd                      1                   cdcb6fe0359c9       etcd-kubernetes-upgrade-644038
	625aa012c8958       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   37 seconds ago       Exited              kube-controller-manager   1                   96c0d35346971       kube-controller-manager-kubernetes-upgrade-644038
	e69042beb60b6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   37 seconds ago       Exited              kube-scheduler            1                   3b122c6b289c4       kube-scheduler-kubernetes-upgrade-644038
	ae210bc0adab3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   37 seconds ago       Exited              kube-proxy                1                   b6b96f67f1fad       kube-proxy-rmxw9
	58d3db62db2ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       1                   0dc0f00f54bcb       storage-provisioner
	
	
	==> coredns [190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2b8469af1db843ab57072d33f07d64eb1a7d32ebe6e0f85d207ee2c7135b0da8] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [65875158488ee870555b20ff1ff0f42bf1872a5a0190ed601150cd817d018efc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> coredns [8a29b2d78ef73736ffb88958c35d9df5aaf0283ca5a11810e111605a36096df8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-644038
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-644038
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-644038
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:15:00 +0000   Tue, 17 Sep 2024 18:13:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:15:00 +0000   Tue, 17 Sep 2024 18:13:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:15:00 +0000   Tue, 17 Sep 2024 18:13:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:15:00 +0000   Tue, 17 Sep 2024 18:13:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.134
	  Hostname:    kubernetes-upgrade-644038
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d675dd5fb4394f04a611873efcba2481
	  System UUID:                d675dd5f-b439-4f04-a611-873efcba2481
	  Boot ID:                    44a89551-0f31-48ef-89fa-4ddad5901be6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jv84g                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 coredns-7c65d6cfc9-q4pb8                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 etcd-kubernetes-upgrade-644038                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         107s
	  kube-system                 kube-apiserver-kubernetes-upgrade-644038             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-644038    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-rmxw9                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-kubernetes-upgrade-644038             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3s                   kube-proxy       
	  Normal  Starting                 33s                  kube-proxy       
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 119s)  kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s (x7 over 119s)  kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s (x8 over 119s)  kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           108s                 node-controller  Node kubernetes-upgrade-644038 event: Registered Node kubernetes-upgrade-644038 in Controller
	  Normal  RegisteredNode           30s                  node-controller  Node kubernetes-upgrade-644038 event: Registered Node kubernetes-upgrade-644038 in Controller
	  Normal  Starting                 8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)      kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)      kubelet          Node kubernetes-upgrade-644038 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node kubernetes-upgrade-644038 event: Registered Node kubernetes-upgrade-644038 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.143865] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.057154] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049693] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.225255] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.140949] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.324095] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[Sep17 18:13] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +0.063200] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.992602] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +6.911852] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.076187] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.035713] kauditd_printk_skb: 28 callbacks suppressed
	[ +30.575795] kauditd_printk_skb: 71 callbacks suppressed
	[Sep17 18:14] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.170165] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.189347] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.155137] systemd-fstab-generator[2301]: Ignoring "noauto" option for root device
	[  +0.339377] systemd-fstab-generator[2329]: Ignoring "noauto" option for root device
	[  +2.791282] systemd-fstab-generator[3077]: Ignoring "noauto" option for root device
	[  +3.638655] kauditd_printk_skb: 224 callbacks suppressed
	[ +24.217355] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[Sep17 18:15] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.780857] systemd-fstab-generator[4114]: Ignoring "noauto" option for root device
	
	
	==> etcd [cdc4d2995ad7fb75e3c92e2e967ff17caad9028540a08e99a2a6009498f082f8] <==
	{"level":"info","ts":"2024-09-17T18:14:57.984784Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65b6c097a273f11b","local-member-id":"f11c9bd62b5da16b","added-peer-id":"f11c9bd62b5da16b","added-peer-peer-urls":["https://192.168.50.134:2380"]}
	{"level":"info","ts":"2024-09-17T18:14:57.984969Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65b6c097a273f11b","local-member-id":"f11c9bd62b5da16b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:14:57.985069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:14:57.990397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:14:57.995623Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.134:2380"}
	{"level":"info","ts":"2024-09-17T18:14:57.995861Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.134:2380"}
	{"level":"info","ts":"2024-09-17T18:14:57.995538Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:14:57.997952Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f11c9bd62b5da16b","initial-advertise-peer-urls":["https://192.168.50.134:2380"],"listen-peer-urls":["https://192.168.50.134:2380"],"advertise-client-urls":["https://192.168.50.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:14:58.000438Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:14:59.042843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:59.042902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:59.042941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b received MsgPreVoteResp from f11c9bd62b5da16b at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:59.042955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became candidate at term 4"}
	{"level":"info","ts":"2024-09-17T18:14:59.042960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b received MsgVoteResp from f11c9bd62b5da16b at term 4"}
	{"level":"info","ts":"2024-09-17T18:14:59.042968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became leader at term 4"}
	{"level":"info","ts":"2024-09-17T18:14:59.042975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f11c9bd62b5da16b elected leader f11c9bd62b5da16b at term 4"}
	{"level":"info","ts":"2024-09-17T18:14:59.044561Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f11c9bd62b5da16b","local-member-attributes":"{Name:kubernetes-upgrade-644038 ClientURLs:[https://192.168.50.134:2379]}","request-path":"/0/members/f11c9bd62b5da16b/attributes","cluster-id":"65b6c097a273f11b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:14:59.044616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:14:59.045076Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:14:59.045868Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:14:59.046713Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.134:2379"}
	{"level":"info","ts":"2024-09-17T18:14:59.048034Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:14:59.049281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:14:59.062295Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:14:59.062381Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498] <==
	{"level":"info","ts":"2024-09-17T18:14:30.078446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:14:30.078722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b received MsgPreVoteResp from f11c9bd62b5da16b at term 2"}
	{"level":"info","ts":"2024-09-17T18:14:30.079046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:30.079698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b received MsgVoteResp from f11c9bd62b5da16b at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:30.083208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f11c9bd62b5da16b became leader at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:30.083294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f11c9bd62b5da16b elected leader f11c9bd62b5da16b at term 3"}
	{"level":"info","ts":"2024-09-17T18:14:30.099854Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f11c9bd62b5da16b","local-member-attributes":"{Name:kubernetes-upgrade-644038 ClientURLs:[https://192.168.50.134:2379]}","request-path":"/0/members/f11c9bd62b5da16b/attributes","cluster-id":"65b6c097a273f11b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:14:30.102761Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:14:30.103107Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:14:30.103910Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:14:30.148677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:14:30.148719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:14:30.170971Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:14:30.171857Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:14:30.191116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.134:2379"}
	{"level":"info","ts":"2024-09-17T18:14:55.047346Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T18:14:55.047405Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-644038","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.134:2380"],"advertise-client-urls":["https://192.168.50.134:2379"]}
	{"level":"warn","ts":"2024-09-17T18:14:55.047546Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T18:14:55.047627Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T18:14:55.049468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.134:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T18:14:55.049496Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.134:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T18:14:55.049537Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f11c9bd62b5da16b","current-leader-member-id":"f11c9bd62b5da16b"}
	{"level":"info","ts":"2024-09-17T18:14:55.053080Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.134:2380"}
	{"level":"info","ts":"2024-09-17T18:14:55.053163Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.134:2380"}
	{"level":"info","ts":"2024-09-17T18:14:55.053173Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-644038","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.134:2380"],"advertise-client-urls":["https://192.168.50.134:2379"]}
	
	
	==> kernel <==
	 18:15:05 up 2 min,  0 users,  load average: 1.10, 0.29, 0.10
	Linux kubernetes-upgrade-644038 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa] <==
	I0917 18:14:44.995439       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 18:14:44.996351       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0917 18:14:44.996993       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 18:14:44.991728       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0917 18:14:44.999817       1 controller.go:176] quota evaluator worker shutdown
	I0917 18:14:44.999931       1 controller.go:176] quota evaluator worker shutdown
	I0917 18:14:45.000058       1 controller.go:176] quota evaluator worker shutdown
	I0917 18:14:45.000174       1 controller.go:176] quota evaluator worker shutdown
	I0917 18:14:44.991736       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0917 18:14:44.991744       1 naming_controller.go:305] Shutting down NamingConditionController
	I0917 18:14:44.991752       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0917 18:14:44.991758       1 controller.go:132] Ending legacy_token_tracking_controller
	I0917 18:14:45.001993       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0917 18:14:44.991766       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0917 18:14:44.991776       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0917 18:14:44.991785       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0917 18:14:44.991798       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0917 18:14:44.991808       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0917 18:14:44.991817       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0917 18:14:44.991827       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0917 18:14:44.991926       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0917 18:14:44.991947       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0917 18:14:44.991953       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0917 18:14:44.991998       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0917 18:14:44.992507       1 establishing_controller.go:92] Shutting down EstablishingController
	
	
	==> kube-apiserver [8da49855e1000372f2826e3f6166e563f8d8b4fffc18b7ea50142f7a52607096] <==
	I0917 18:15:00.537068       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 18:15:00.537158       1 aggregator.go:171] initial CRD sync complete...
	I0917 18:15:00.537169       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 18:15:00.537175       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 18:15:00.537181       1 cache.go:39] Caches are synced for autoregister controller
	I0917 18:15:00.539874       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 18:15:00.539977       1 policy_source.go:224] refreshing policies
	I0917 18:15:00.608720       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 18:15:00.622816       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 18:15:00.623192       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 18:15:00.624457       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 18:15:00.624571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 18:15:00.627760       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 18:15:00.628063       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 18:15:00.628104       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 18:15:00.634486       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 18:15:01.436131       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 18:15:01.857027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.134]
	I0917 18:15:01.858445       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 18:15:01.864566       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 18:15:02.389595       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 18:15:02.409332       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 18:15:02.461859       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 18:15:02.511561       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 18:15:02.539383       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b] <==
	I0917 18:14:35.512502       1 shared_informer.go:320] Caches are synced for persistent volume
	I0917 18:14:35.515378       1 shared_informer.go:320] Caches are synced for PV protection
	I0917 18:14:35.519547       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 18:14:35.521118       1 shared_informer.go:320] Caches are synced for job
	I0917 18:14:35.523642       1 shared_informer.go:320] Caches are synced for cronjob
	I0917 18:14:35.524892       1 shared_informer.go:320] Caches are synced for disruption
	I0917 18:14:35.530390       1 shared_informer.go:320] Caches are synced for PVC protection
	I0917 18:14:35.533717       1 shared_informer.go:320] Caches are synced for ephemeral
	I0917 18:14:35.537906       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0917 18:14:35.542324       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0917 18:14:35.565168       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 18:14:35.612558       1 shared_informer.go:320] Caches are synced for attach detach
	I0917 18:14:35.661455       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0917 18:14:35.670167       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:14:35.708544       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 18:14:35.708624       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-644038"
	I0917 18:14:35.718325       1 shared_informer.go:320] Caches are synced for crt configmap
	I0917 18:14:35.721499       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:14:35.728493       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="163.023311ms"
	I0917 18:14:35.729718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="92.588µs"
	I0917 18:14:35.761196       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 18:14:36.155865       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:14:36.191440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:14:36.191578       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 18:14:40.541208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.574µs"
	
	
	==> kube-controller-manager [a3add9f200fac8d2dc2ab9cffae3833847c2da424d60127a41af264a7563d570] <==
	I0917 18:15:03.888190       1 shared_informer.go:320] Caches are synced for GC
	I0917 18:15:03.891368       1 shared_informer.go:320] Caches are synced for node
	I0917 18:15:03.891458       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0917 18:15:03.891528       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 18:15:03.891564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0917 18:15:03.891574       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0917 18:15:03.891687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-644038"
	I0917 18:15:03.892374       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0917 18:15:03.894982       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0917 18:15:03.897580       1 shared_informer.go:320] Caches are synced for persistent volume
	I0917 18:15:03.897581       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 18:15:03.900683       1 shared_informer.go:320] Caches are synced for ephemeral
	I0917 18:15:03.933766       1 shared_informer.go:320] Caches are synced for HPA
	I0917 18:15:04.034807       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0917 18:15:04.080440       1 shared_informer.go:320] Caches are synced for taint
	I0917 18:15:04.081392       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 18:15:04.081512       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-644038"
	I0917 18:15:04.081572       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 18:15:04.096720       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:15:04.133704       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:15:04.303877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="406.080958ms"
	I0917 18:15:04.304015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.56µs"
	I0917 18:15:04.549364       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:15:04.549406       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 18:15:04.549686       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:14:29.760900       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:14:32.270616       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.134"]
	E0917 18:14:32.270730       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:14:32.335340       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:14:32.335393       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:14:32.335422       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:14:32.338653       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:14:32.339223       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:14:32.339353       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:14:32.341168       1 config.go:199] "Starting service config controller"
	I0917 18:14:32.341349       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:14:32.341418       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:14:32.341437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:14:32.344903       1 config.go:328] "Starting node config controller"
	I0917 18:14:32.344962       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:14:32.442346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:14:32.442571       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:14:32.446344       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f4eea2c832a40da3776deb2321cf2969b9d569cc333e53ae4ef6a405f1ca5595] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:15:01.662840       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:15:01.677783       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.134"]
	E0917 18:15:01.678279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:15:01.732789       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:15:01.732903       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:15:01.732999       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:15:01.735793       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:15:01.736130       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:15:01.736438       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:15:01.737706       1 config.go:199] "Starting service config controller"
	I0917 18:15:01.737859       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:15:01.737987       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:15:01.738068       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:15:01.738780       1 config.go:328] "Starting node config controller"
	I0917 18:15:01.738865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:15:01.839154       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:15:01.839170       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:15:01.839153       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4bd59ede5ba6792f315d549193c31d2341274f50fd4905af7b805e07f979ff31] <==
	I0917 18:14:58.756620       1 serving.go:386] Generated self-signed cert in-memory
	W0917 18:15:00.496702       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 18:15:00.496829       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 18:15:00.496866       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 18:15:00.497159       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 18:15:00.535624       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 18:15:00.538418       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:15:00.541086       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 18:15:00.541639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 18:15:00.541720       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 18:15:00.541890       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 18:15:00.643545       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab] <==
	I0917 18:14:29.995165       1 serving.go:386] Generated self-signed cert in-memory
	W0917 18:14:32.151767       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 18:14:32.152411       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 18:14:32.152517       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 18:14:32.152554       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 18:14:32.258286       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 18:14:32.258382       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:14:32.265814       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 18:14:32.265989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 18:14:32.266039       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 18:14:32.266073       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 18:14:32.366563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 18:14:55.319727       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 18:14:55.319891       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 18:14:55.320312       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 18:14:55.320634       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.279379    3651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ffcf1970c20dc0fb98df756be1b79e37e90cb19fdb094fc83471a40f588565"} err="failed to get container status \"93ffcf1970c20dc0fb98df756be1b79e37e90cb19fdb094fc83471a40f588565\": rpc error: code = NotFound desc = could not find container \"93ffcf1970c20dc0fb98df756be1b79e37e90cb19fdb094fc83471a40f588565\": container with ID starting with 93ffcf1970c20dc0fb98df756be1b79e37e90cb19fdb094fc83471a40f588565 not found: ID does not exist"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.379278    3651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d70e9195d8956e82b5df2495628c9136-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-644038\" (UID: \"d70e9195d8956e82b5df2495628c9136\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-644038"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.459705    3651 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-644038"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: E0917 18:14:57.460569    3651 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.134:8443: connect: connection refused" node="kubernetes-upgrade-644038"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.568211    3651 scope.go:117] "RemoveContainer" containerID="cf6ed605272c7b377276d316e46536bada906593fbf9948ca39e478592d14498"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.568332    3651 scope.go:117] "RemoveContainer" containerID="0ab5909dd5d7f6118236de0f58c48a2fff7a8ae8a66de3b66d2e04494670d7aa"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.575521    3651 scope.go:117] "RemoveContainer" containerID="625aa012c8958ec6257ca6b65a853eee78d0d404d1432ea719a70d506f07ce3b"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.575780    3651 scope.go:117] "RemoveContainer" containerID="e69042beb60b69f5363ae3039716f648105512e150bb36b4c119d18229dad1ab"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: E0917 18:14:57.676046    3651 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-644038?timeout=10s\": dial tcp 192.168.50.134:8443: connect: connection refused" interval="800ms"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:57.863285    3651 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-644038"
	Sep 17 18:14:57 kubernetes-upgrade-644038 kubelet[3651]: E0917 18:14:57.864655    3651 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.134:8443: connect: connection refused" node="kubernetes-upgrade-644038"
	Sep 17 18:14:58 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:14:58.666873    3651 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-644038"
	Sep 17 18:15:00 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:00.605090    3651 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-644038"
	Sep 17 18:15:00 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:00.605285    3651 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-644038"
	Sep 17 18:15:00 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:00.605331    3651 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 18:15:00 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:00.607031    3651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.048569    3651 apiserver.go:52] "Watching apiserver"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.066872    3651 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.127912    3651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16a23bae-a3e7-4749-9fbe-ab281cfd4782-lib-modules\") pod \"kube-proxy-rmxw9\" (UID: \"16a23bae-a3e7-4749-9fbe-ab281cfd4782\") " pod="kube-system/kube-proxy-rmxw9"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.128903    3651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16a23bae-a3e7-4749-9fbe-ab281cfd4782-xtables-lock\") pod \"kube-proxy-rmxw9\" (UID: \"16a23bae-a3e7-4749-9fbe-ab281cfd4782\") " pod="kube-system/kube-proxy-rmxw9"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.129046    3651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/18cd416b-6de8-4b85-8393-40a39ef67f3f-tmp\") pod \"storage-provisioner\" (UID: \"18cd416b-6de8-4b85-8393-40a39ef67f3f\") " pod="kube-system/storage-provisioner"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: E0917 18:15:01.304557    3651 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-644038\" already exists" pod="kube-system/etcd-kubernetes-upgrade-644038"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.354395    3651 scope.go:117] "RemoveContainer" containerID="ae210bc0adab30d1fff1a80e0b660ab8e1bf9374e28886f520836c3f956fe501"
	Sep 17 18:15:01 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:01.354718    3651 scope.go:117] "RemoveContainer" containerID="190bbcc2474474ffce3263bca0f993ae3853e76e36370b1de1ea37faf4b1697c"
	Sep 17 18:15:05 kubernetes-upgrade-644038 kubelet[3651]: I0917 18:15:05.633904    3651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [2e5a0f89a24d0708005686f37ba0438876f0524b41d3725650fd426aea4a479d] <==
	I0917 18:14:42.655035       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:14:42.669332       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:14:42.669443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0917 18:14:46.126794       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0917 18:14:50.385342       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0917 18:14:53.981449       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0917 18:14:57.032585       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0917 18:15:00.052331       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0917 18:15:03.718838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:15:03.719054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-644038_135d6966-ef1f-40ee-aa4a-a2c2d02eb5c9!
	I0917 18:15:03.720373       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60a944cd-4a9f-46e2-bb69-e5c97db81799", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-644038_135d6966-ef1f-40ee-aa4a-a2c2d02eb5c9 became leader
	I0917 18:15:03.819410       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-644038_135d6966-ef1f-40ee-aa4a-a2c2d02eb5c9!
	
	
	==> storage-provisioner [58d3db62db2ff3f3d2b459cfb402820e8ebc9b6d7002e422fd2d482dcae82643] <==
	I0917 18:13:48.880043       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:13:48.890073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:13:48.890164       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:13:48.901807       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:13:48.901967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-644038_44b919ff-90bb-49e4-9456-05d7021bbfb5!
	I0917 18:13:48.902497       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60a944cd-4a9f-46e2-bb69-e5c97db81799", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-644038_44b919ff-90bb-49e4-9456-05d7021bbfb5 became leader
	I0917 18:13:49.002950       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-644038_44b919ff-90bb-49e4-9456-05d7021bbfb5!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:15:04.523546   62959 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19662-11085/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-644038 -n kubernetes-upgrade-644038
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-644038 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-644038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-644038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-644038: (1.225666316s)
--- FAIL: TestKubernetesUpgrade (495.09s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (81.91s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-246701 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-246701 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.393273745s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-246701] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-246701" primary control-plane node in "pause-246701" cluster
	* Updating the running kvm2 "pause-246701" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-246701" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:09:55.999972   58375 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:09:56.000240   58375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:09:56.000250   58375 out.go:358] Setting ErrFile to fd 2...
	I0917 18:09:56.000255   58375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:09:56.000440   58375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:09:56.000975   58375 out.go:352] Setting JSON to false
	I0917 18:09:56.002007   58375 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6711,"bootTime":1726589885,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:09:56.002107   58375 start.go:139] virtualization: kvm guest
	I0917 18:09:56.004303   58375 out.go:177] * [pause-246701] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:09:56.005800   58375 notify.go:220] Checking for updates...
	I0917 18:09:56.005810   58375 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:09:56.007144   58375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:09:56.008377   58375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:09:56.009516   58375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:09:56.010642   58375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:09:56.011764   58375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:09:56.013588   58375 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:09:56.014193   58375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:09:56.014246   58375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:09:56.030387   58375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0917 18:09:56.030797   58375 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:09:56.031302   58375 main.go:141] libmachine: Using API Version  1
	I0917 18:09:56.031322   58375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:09:56.031755   58375 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:09:56.031971   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:09:56.032213   58375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:09:56.032509   58375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:09:56.032555   58375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:09:56.048391   58375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0917 18:09:56.048964   58375 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:09:56.049512   58375 main.go:141] libmachine: Using API Version  1
	I0917 18:09:56.049536   58375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:09:56.049882   58375 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:09:56.050033   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:09:56.089080   58375 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:09:56.090363   58375 start.go:297] selected driver: kvm2
	I0917 18:09:56.090379   58375 start.go:901] validating driver "kvm2" against &{Name:pause-246701 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:pause-246701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:09:56.090532   58375 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:09:56.090872   58375 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:09:56.090971   58375 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:09:56.108304   58375 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:09:56.109345   58375 cni.go:84] Creating CNI manager for ""
	I0917 18:09:56.109410   58375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:09:56.109492   58375 start.go:340] cluster config:
	{Name:pause-246701 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-246701 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:09:56.109705   58375 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:09:56.111954   58375 out.go:177] * Starting "pause-246701" primary control-plane node in "pause-246701" cluster
	I0917 18:09:56.113283   58375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:09:56.113338   58375 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 18:09:56.113347   58375 cache.go:56] Caching tarball of preloaded images
	I0917 18:09:56.113438   58375 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:09:56.113448   58375 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 18:09:56.113555   58375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/config.json ...
	I0917 18:09:56.113784   58375 start.go:360] acquireMachinesLock for pause-246701: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:10:10.650859   58375 start.go:364] duration metric: took 14.537047312s to acquireMachinesLock for "pause-246701"
	I0917 18:10:10.650926   58375 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:10:10.650934   58375 fix.go:54] fixHost starting: 
	I0917 18:10:10.651305   58375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:10:10.651361   58375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:10:10.669387   58375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0917 18:10:10.669820   58375 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:10:10.670325   58375 main.go:141] libmachine: Using API Version  1
	I0917 18:10:10.670347   58375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:10:10.670716   58375 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:10:10.670913   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:10.671104   58375 main.go:141] libmachine: (pause-246701) Calling .GetState
	I0917 18:10:10.672990   58375 fix.go:112] recreateIfNeeded on pause-246701: state=Running err=<nil>
	W0917 18:10:10.673015   58375 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:10:10.675534   58375 out.go:177] * Updating the running kvm2 "pause-246701" VM ...
	I0917 18:10:10.676958   58375 machine.go:93] provisionDockerMachine start ...
	I0917 18:10:10.676987   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:10.677280   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:10.680287   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.680765   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:10.680791   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.680954   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:10.681137   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.681329   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.681491   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:10.681664   58375 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:10.681851   58375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0917 18:10:10.681861   58375 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:10:10.790539   58375 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-246701
	
	I0917 18:10:10.790580   58375 main.go:141] libmachine: (pause-246701) Calling .GetMachineName
	I0917 18:10:10.790816   58375 buildroot.go:166] provisioning hostname "pause-246701"
	I0917 18:10:10.790839   58375 main.go:141] libmachine: (pause-246701) Calling .GetMachineName
	I0917 18:10:10.791042   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:10.794440   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.794883   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:10.794924   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.795150   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:10.795351   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.795506   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.795672   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:10.795838   58375 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:10.796033   58375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0917 18:10:10.796050   58375 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-246701 && echo "pause-246701" | sudo tee /etc/hostname
	I0917 18:10:10.928198   58375 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-246701
	
	I0917 18:10:10.928228   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:10.930907   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.931254   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:10.931304   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:10.931464   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:10.931684   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.931862   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:10.932000   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:10.932185   58375 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:10.932348   58375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0917 18:10:10.932364   58375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-246701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-246701/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-246701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:10:11.042939   58375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:10:11.042970   58375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:10:11.043005   58375 buildroot.go:174] setting up certificates
	I0917 18:10:11.043018   58375 provision.go:84] configureAuth start
	I0917 18:10:11.043030   58375 main.go:141] libmachine: (pause-246701) Calling .GetMachineName
	I0917 18:10:11.043440   58375 main.go:141] libmachine: (pause-246701) Calling .GetIP
	I0917 18:10:11.046819   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.047351   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:11.047398   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.047583   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:11.049862   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.050260   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:11.050284   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.050435   58375 provision.go:143] copyHostCerts
	I0917 18:10:11.050500   58375 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:10:11.050512   58375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:10:11.050574   58375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:10:11.050695   58375 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:10:11.050703   58375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:10:11.050735   58375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:10:11.050815   58375 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:10:11.050825   58375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:10:11.050854   58375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:10:11.050972   58375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.pause-246701 san=[127.0.0.1 192.168.39.167 localhost minikube pause-246701]
	I0917 18:10:11.209806   58375 provision.go:177] copyRemoteCerts
	I0917 18:10:11.209857   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:10:11.209888   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:11.213324   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.213803   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:11.213826   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.214106   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:11.214326   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:11.214520   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:11.214644   58375 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/pause-246701/id_rsa Username:docker}
	I0917 18:10:11.305647   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:10:11.336311   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:10:11.372870   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0917 18:10:11.401855   58375 provision.go:87] duration metric: took 358.825133ms to configureAuth
	I0917 18:10:11.401881   58375 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:10:11.402079   58375 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:10:11.402147   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:11.404921   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.405386   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:11.405418   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:11.405639   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:11.405822   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:11.406020   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:11.406133   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:11.406289   58375 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:11.406448   58375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0917 18:10:11.406461   58375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:10:16.969008   58375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:10:16.969036   58375 machine.go:96] duration metric: took 6.292060922s to provisionDockerMachine
	I0917 18:10:16.969049   58375 start.go:293] postStartSetup for "pause-246701" (driver="kvm2")
	I0917 18:10:16.969061   58375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:10:16.969095   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:16.969408   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:10:16.969441   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:16.972493   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:16.972882   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:16.972928   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:16.973102   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:16.973309   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:16.973455   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:16.973566   58375 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/pause-246701/id_rsa Username:docker}
	I0917 18:10:17.057448   58375 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:10:17.063460   58375 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:10:17.063485   58375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:10:17.063560   58375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:10:17.063675   58375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:10:17.063778   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:10:17.076568   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:10:17.105983   58375 start.go:296] duration metric: took 136.919723ms for postStartSetup
	I0917 18:10:17.106031   58375 fix.go:56] duration metric: took 6.455097049s for fixHost
	I0917 18:10:17.106052   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:17.109252   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.109658   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:17.109686   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.109882   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:17.110082   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:17.110285   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:17.110421   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:17.110590   58375 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:17.110803   58375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0917 18:10:17.110816   58375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:10:17.224859   58375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596617.179468862
	
	I0917 18:10:17.224885   58375 fix.go:216] guest clock: 1726596617.179468862
	I0917 18:10:17.224895   58375 fix.go:229] Guest: 2024-09-17 18:10:17.179468862 +0000 UTC Remote: 2024-09-17 18:10:17.10603598 +0000 UTC m=+21.150466776 (delta=73.432882ms)
	I0917 18:10:17.224927   58375 fix.go:200] guest clock delta is within tolerance: 73.432882ms
	I0917 18:10:17.224934   58375 start.go:83] releasing machines lock for "pause-246701", held for 6.574029969s
	I0917 18:10:17.224956   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:17.225252   58375 main.go:141] libmachine: (pause-246701) Calling .GetIP
	I0917 18:10:17.228604   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.229092   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:17.229128   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.229302   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:17.229807   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:17.230016   58375 main.go:141] libmachine: (pause-246701) Calling .DriverName
	I0917 18:10:17.230143   58375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:10:17.230191   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:17.230519   58375 ssh_runner.go:195] Run: cat /version.json
	I0917 18:10:17.230538   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHHostname
	I0917 18:10:17.233545   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.234219   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:17.234244   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.234322   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.234356   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:17.234585   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:17.234745   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:17.234790   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:17.234811   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:17.234996   58375 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/pause-246701/id_rsa Username:docker}
	I0917 18:10:17.235021   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHPort
	I0917 18:10:17.235201   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHKeyPath
	I0917 18:10:17.235356   58375 main.go:141] libmachine: (pause-246701) Calling .GetSSHUsername
	I0917 18:10:17.235555   58375 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/pause-246701/id_rsa Username:docker}
	I0917 18:10:17.351096   58375 ssh_runner.go:195] Run: systemctl --version
	I0917 18:10:17.358836   58375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:10:17.521740   58375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:10:17.528682   58375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:10:17.528753   58375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:10:17.539917   58375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 18:10:17.539945   58375 start.go:495] detecting cgroup driver to use...
	I0917 18:10:17.540000   58375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:10:17.563333   58375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:10:17.579484   58375 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:10:17.579586   58375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:10:17.595825   58375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:10:17.614506   58375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:10:17.785466   58375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:10:17.947837   58375 docker.go:233] disabling docker service ...
	I0917 18:10:17.947946   58375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:10:17.971962   58375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:10:17.987298   58375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:10:18.141548   58375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:10:18.287876   58375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:10:18.310626   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:10:18.340393   58375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:10:18.340458   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.353647   58375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:10:18.353720   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.366453   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.378521   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.396409   58375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:10:18.408224   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.420779   58375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.435057   58375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:10:18.450098   58375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:10:18.461182   58375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:10:18.473186   58375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:10:18.627901   58375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:10:25.689183   58375 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.06124232s)
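Editor's note: the sed edits above only touch the /etc/crio/crio.conf.d/02-crio.conf drop-in. A minimal sketch, assuming that same drop-in path, of how one could confirm the values those commands were meant to leave behind before trusting the crio restart:

    # Hedged verification sketch; paths and keys are the ones targeted by the sed commands above.
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo grep -A1 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # should list "net.ipv4.ip_unprivileged_port_start=0"
    sudo systemctl is-active crio                                         # confirms the restarted service is up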
	I0917 18:10:25.689217   58375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:10:25.689296   58375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:10:25.697314   58375 start.go:563] Will wait 60s for crictl version
	I0917 18:10:25.697387   58375 ssh_runner.go:195] Run: which crictl
	I0917 18:10:25.701581   58375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:10:25.749380   58375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:10:25.749537   58375 ssh_runner.go:195] Run: crio --version
	I0917 18:10:25.783930   58375 ssh_runner.go:195] Run: crio --version
	I0917 18:10:25.818817   58375 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:10:25.820231   58375 main.go:141] libmachine: (pause-246701) Calling .GetIP
	I0917 18:10:25.823454   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:25.823815   58375 main.go:141] libmachine: (pause-246701) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:be:cc", ip: ""} in network mk-pause-246701: {Iface:virbr1 ExpiryTime:2024-09-17 19:08:48 +0000 UTC Type:0 Mac:52:54:00:35:be:cc Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:pause-246701 Clientid:01:52:54:00:35:be:cc}
	I0917 18:10:25.823841   58375 main.go:141] libmachine: (pause-246701) DBG | domain pause-246701 has defined IP address 192.168.39.167 and MAC address 52:54:00:35:be:cc in network mk-pause-246701
	I0917 18:10:25.824215   58375 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:10:25.830187   58375 kubeadm.go:883] updating cluster {Name:pause-246701 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-246701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:10:25.830354   58375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:10:25.830414   58375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:10:25.885024   58375 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:10:25.885058   58375 crio.go:433] Images already preloaded, skipping extraction
	I0917 18:10:25.885119   58375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:10:25.928720   58375 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:10:25.928748   58375 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:10:25.928762   58375 kubeadm.go:934] updating node { 192.168.39.167 8443 v1.31.1 crio true true} ...
	I0917 18:10:25.928886   58375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-246701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-246701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:10:25.928969   58375 ssh_runner.go:195] Run: crio config
	I0917 18:10:25.987537   58375 cni.go:84] Creating CNI manager for ""
	I0917 18:10:25.987566   58375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:10:25.987577   58375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:10:25.987603   58375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.167 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-246701 NodeName:pause-246701 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:10:25.987787   58375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-246701"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:10:25.987861   58375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:10:26.001142   58375 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:10:26.001221   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:10:26.015349   58375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 18:10:26.038867   58375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:10:26.061950   58375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0917 18:10:26.083148   58375 ssh_runner.go:195] Run: grep 192.168.39.167	control-plane.minikube.internal$ /etc/hosts
	I0917 18:10:26.088460   58375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:10:26.234802   58375 ssh_runner.go:195] Run: sudo systemctl start kubelet
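Editor's note: the three scp'd files above are the kubelet systemd drop-in (10-kubeadm.conf), the kubelet unit, and the staged kubeadm config. A short hedged check, using only the paths written above, that systemd picked them up after the daemon-reload:

    # Sketch; file names are the ones written by the scp steps above.
    systemctl cat kubelet | head -n 20      # should show the 10-kubeadm.conf drop-in with --hostname-override=pause-246701
    sudo test -f /var/tmp/minikube/kubeadm.yaml.new && echo "kubeadm.yaml.new staged"
    systemctl is-active kubelet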
	I0917 18:10:26.255117   58375 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701 for IP: 192.168.39.167
	I0917 18:10:26.255140   58375 certs.go:194] generating shared ca certs ...
	I0917 18:10:26.255156   58375 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:10:26.255341   58375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:10:26.255400   58375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:10:26.255414   58375 certs.go:256] generating profile certs ...
	I0917 18:10:26.255521   58375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/client.key
	I0917 18:10:26.255624   58375 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/apiserver.key.1bb97981
	I0917 18:10:26.255683   58375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/proxy-client.key
	I0917 18:10:26.255853   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:10:26.255892   58375 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:10:26.255905   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:10:26.255928   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:10:26.255951   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:10:26.255970   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:10:26.256009   58375 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:10:26.256617   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:10:26.287964   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:10:26.317077   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:10:26.352588   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:10:26.386001   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 18:10:26.421251   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:10:26.453320   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:10:26.485682   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/pause-246701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:10:26.520470   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:10:26.556940   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:10:26.585520   58375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:10:26.614738   58375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:10:26.633611   58375 ssh_runner.go:195] Run: openssl version
	I0917 18:10:26.641464   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:10:26.658057   58375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:10:26.665015   58375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:10:26.665091   58375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:10:26.673836   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:10:26.774049   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:10:26.788154   58375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:10:26.803377   58375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:10:26.803453   58375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:10:26.823168   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:10:26.926215   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:10:26.989558   58375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:10:27.015144   58375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:10:27.015225   58375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:10:27.033879   58375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:10:27.170805   58375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:10:27.258220   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:10:27.299841   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:10:27.375855   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:10:27.477821   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:10:27.523197   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:10:27.547694   58375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
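Editor's note: the sequence above installs each PEM under /usr/share/ca-certificates, symlinks it into /etc/ssl/certs under its OpenSSL subject hash, and then uses `openssl x509 -checkend 86400` on the kubeadm-generated certs to confirm nothing expires within a day. A compact, hedged restatement of that pattern for a single certificate (minikubeCA.pem, whose hash b5213941 appears in the log above):

    # Sketch of the hash-symlink + expiry-check pattern used above.
    CERT=minikubeCA.pem
    sudo ln -fs "/usr/share/ca-certificates/$CERT" "/etc/ssl/certs/$CERT"
    HASH=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$CERT")   # prints b5213941 for this CA
    sudo ln -fs "/etc/ssl/certs/$CERT" "/etc/ssl/certs/${HASH}.0"              # OpenSSL-style hashed symlink
    openssl x509 -noout -in "/usr/share/ca-certificates/$CERT" -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"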
	I0917 18:10:27.572190   58375 kubeadm.go:392] StartCluster: {Name:pause-246701 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-246701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:10:27.572371   58375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:10:27.572435   58375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:10:27.699980   58375 cri.go:89] found id: "01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b"
	I0917 18:10:27.700008   58375 cri.go:89] found id: "1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0"
	I0917 18:10:27.700014   58375 cri.go:89] found id: "f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec"
	I0917 18:10:27.700019   58375 cri.go:89] found id: "fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487"
	I0917 18:10:27.700023   58375 cri.go:89] found id: "4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65"
	I0917 18:10:27.700027   58375 cri.go:89] found id: "5f053a0840c2ed6a41dcc4d34fbfc23e10e3c96d8e1f854a5b7fadc23ca47845"
	I0917 18:10:27.700031   58375 cri.go:89] found id: "33d59acb7ffaf293520cce65a3eecb173df48b67b954734f142eb2ed3b8849e8"
	I0917 18:10:27.700035   58375 cri.go:89] found id: "80b367d092435a1351c64a2b2e3bf5cfba7bd47e66ff41495623fc364dc67db0"
	I0917 18:10:27.700065   58375 cri.go:89] found id: ""
	I0917 18:10:27.700116   58375 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-246701 -n pause-246701
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-246701 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-246701 logs -n 25: (1.60779071s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo cat              | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo cat              | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo find             | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo crio             | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-639892                       | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC | 17 Sep 24 18:07 UTC |
	| start   | -p running-upgrade-271344              | minikube                  | jenkins | v1.26.0 | 17 Sep 24 18:07 UTC | 17 Sep 24 18:09 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-624774                 | offline-crio-624774       | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:08 UTC |
	| start   | -p pause-246701 --memory=2048          | pause-246701              | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:09 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-775296 stop            | minikube                  | jenkins | v1.26.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:08 UTC |
	| start   | -p stopped-upgrade-775296              | stopped-upgrade-775296    | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:09 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-271344              | running-upgrade-271344    | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:10 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-775296              | stopped-upgrade-775296    | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:09 UTC |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC |                     |
	|         | --no-kubernetes                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20              |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:10 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-246701                        | pause-246701              | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:11 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-271344              | running-upgrade-271344    | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	| start   | -p force-systemd-flag-722424           | force-systemd-flag-722424 | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC |                     |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC |                     |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:10:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:10:49.934878   59176 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:10:49.934987   59176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:10:49.934991   59176 out.go:358] Setting ErrFile to fd 2...
	I0917 18:10:49.934994   59176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:10:49.935208   59176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:10:49.935800   59176 out.go:352] Setting JSON to false
	I0917 18:10:49.936806   59176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6765,"bootTime":1726589885,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:10:49.936889   59176 start.go:139] virtualization: kvm guest
	I0917 18:10:49.939113   59176 out.go:177] * [NoKubernetes-267093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:10:49.940553   59176 notify.go:220] Checking for updates...
	I0917 18:10:49.940573   59176 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:10:49.941896   59176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:10:49.943246   59176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:10:49.944487   59176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:10:49.945831   59176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:10:49.947200   59176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:10:49.948939   59176 config.go:182] Loaded profile config "force-systemd-flag-722424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:10:49.949023   59176 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:10:49.949128   59176 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:10:49.949142   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:49.949210   59176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:10:49.986626   59176 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:10:49.987835   59176 start.go:297] selected driver: kvm2
	I0917 18:10:49.987841   59176 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:10:49.987851   59176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:10:49.988109   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:49.988171   59176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:10:49.988235   59176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:10:50.005310   59176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:10:50.005356   59176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 18:10:50.005889   59176 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0917 18:10:50.006050   59176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 18:10:50.006074   59176 cni.go:84] Creating CNI manager for ""
	I0917 18:10:50.006116   59176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:10:50.006120   59176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 18:10:50.006133   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:50.006188   59176 start.go:340] cluster config:
	{Name:NoKubernetes-267093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-267093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:10:50.006278   59176 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:10:50.009059   59176 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-267093
	I0917 18:10:45.291426   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:45.291906   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:45.291929   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:45.291858   58889 retry.go:31] will retry after 2.013760418s: waiting for machine to come up
	I0917 18:10:47.307834   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:47.308379   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:47.308402   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:47.308326   58889 retry.go:31] will retry after 2.972611801s: waiting for machine to come up
	I0917 18:10:48.656150   58375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e 7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31 01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b 1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0 f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 5f053a0840c2ed6a41dcc4d34fbfc23e10e3c96d8e1f854a5b7fadc23ca47845 33d59acb7ffaf293520cce65a3eecb173df48b67b954734f142eb2ed3b8849e8 80b367d092435a1351c64a2b2e3bf5cfba7bd47e66ff41495623fc364dc67db0: (20.582307034s)
	W0917 18:10:48.656244   58375 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e 7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31 01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b 1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0 f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 5f053a0840c2ed6a41dcc4d34fbfc23e10e3c96d8e1f854a5b7fadc23ca47845 33d59acb7ffaf293520cce65a3eecb173df48b67b954734f142eb2ed3b8849e8 80b367d092435a1351c64a2b2e3bf5cfba7bd47e66ff41495623fc364dc67db0: Process exited with status 1
	stdout:
	59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e
	7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31
	01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b
	1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0
	f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec
	fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487
	
	stderr:
	E0917 18:10:48.608902    2924 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": container with ID starting with 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 not found: ID does not exist" containerID="4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65"
	time="2024-09-17T18:10:48Z" level=fatal msg="stopping the container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": rpc error: code = NotFound desc = could not find container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": container with ID starting with 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 not found: ID does not exist"
	I0917 18:10:48.656304   58375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:10:48.707261   58375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:10:48.722120   58375 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 17 18:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Sep 17 18:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 17 18:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Sep 17 18:09 /etc/kubernetes/scheduler.conf
	
	I0917 18:10:48.722186   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:10:48.734681   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:10:48.746535   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:10:48.756789   58375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:10:48.756855   58375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:10:48.768997   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:10:48.778865   58375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:10:48.778932   58375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
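Editor's note: the steps from "found existing configuration files" onward apply a simple rule: keep each existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the `kubeadm init phase kubeconfig` run below regenerates it. A hedged sketch of that loop, using the same four files and endpoint shown in the log:

    # Keep-or-remove logic for stale kubeconfigs (same files and endpoint as above).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # recreated by 'kubeadm init phase kubeconfig all' below
      fi
    done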
	I0917 18:10:48.790532   58375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:10:48.802606   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:48.864369   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:49.780795   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.038491   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.101404   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.197263   58375 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:10:50.197353   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:50.698081   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:50.844416   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:10:50.844723   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:10:51.198506   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:51.214842   58375 api_server.go:72] duration metric: took 1.017580439s to wait for apiserver process to appear ...
	I0917 18:10:51.214870   58375 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:10:51.214894   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.177251   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:10:53.177287   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:10:53.177305   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.243990   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:10:53.244024   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:10:53.244043   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.290941   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:53.290968   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:53.715440   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.721648   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:53.721679   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:54.215212   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:54.222108   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:54.222141   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:54.715756   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:54.720122   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I0917 18:10:54.726632   58375 api_server.go:141] control plane version: v1.31.1
	I0917 18:10:54.726657   58375 api_server.go:131] duration metric: took 3.511779718s to wait for apiserver health ...
	I0917 18:10:54.726664   58375 cni.go:84] Creating CNI manager for ""
	I0917 18:10:54.726670   58375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:10:54.728481   58375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
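The 403 and 500 responses above are expected transitional states while the restarted apiserver re-runs its post-start hooks; the wait loop keeps polling /healthz until the body is a literal "ok". Below is a minimal sketch of that polling pattern using plain net/http with TLS verification disabled (the probe is anonymous against a self-signed cert). The URL and overall timeout are taken from the log; everything else is an assumption, not minikube's actual api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline expires. 403 (anonymous user not yet authorized)
// and 500 (post-start hooks still failing) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against the cluster's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.167:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```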
	I0917 18:10:50.010337   59176 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0917 18:10:50.041128   59176 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0917 18:10:50.041323   59176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/NoKubernetes-267093/config.json ...
	I0917 18:10:50.041360   59176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/NoKubernetes-267093/config.json: {Name:mk55306f1bdb213fa8ea7b864c2a5c0cfe508335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:10:50.041557   59176 start.go:360] acquireMachinesLock for NoKubernetes-267093: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:10:50.282278   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:50.282783   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:50.282815   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:50.282735   58889 retry.go:31] will retry after 3.159205139s: waiting for machine to come up
	I0917 18:10:53.443476   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:53.444150   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:53.444175   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:53.444103   58889 retry.go:31] will retry after 5.581189391s: waiting for machine to come up
	I0917 18:10:54.729695   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:10:54.740982   58375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
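The 496-byte conflist being copied here is not reproduced in the log. Purely as an illustration, a generic bridge CNI configuration of the kind dropped into /etc/cni/net.d looks like the sketch below; every field value is an assumption, not the exact 1-k8s.conflist minikube writes.

```go
package main

import (
	"log"
	"os"
)

// A generic bridge CNI conflist of the sort written to /etc/cni/net.d.
// All values are illustrative assumptions, not minikube's actual file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```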
	I0917 18:10:54.763140   58375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:10:54.763213   58375 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 18:10:54.763234   58375 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 18:10:54.772194   58375 system_pods.go:59] 6 kube-system pods found
	I0917 18:10:54.772227   58375 system_pods.go:61] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:10:54.772238   58375 system_pods.go:61] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:10:54.772245   58375 system_pods.go:61] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:10:54.772253   58375 system_pods.go:61] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:10:54.772260   58375 system_pods.go:61] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:10:54.772268   58375 system_pods.go:61] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:10:54.772279   58375 system_pods.go:74] duration metric: took 9.118021ms to wait for pod list to return data ...
	I0917 18:10:54.772292   58375 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:10:54.776731   58375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:10:54.776757   58375 node_conditions.go:123] node cpu capacity is 2
	I0917 18:10:54.776767   58375 node_conditions.go:105] duration metric: took 4.470719ms to run NodePressure ...
	I0917 18:10:54.776782   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:55.037803   58375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:10:55.041708   58375 kubeadm.go:739] kubelet initialised
	I0917 18:10:55.041729   58375 kubeadm.go:740] duration metric: took 3.900914ms waiting for restarted kubelet to initialise ...
	I0917 18:10:55.041738   58375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:10:55.045730   58375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:10:55.050796   58375 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace has status "Ready":"True"
	I0917 18:10:55.050816   58375 pod_ready.go:82] duration metric: took 5.059078ms for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:10:55.050824   58375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
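The pod_ready wait that begins here amounts to polling each system-critical pod until its Ready condition flips to True. A hedged client-go sketch of that check follows; the function names, timeout, and kubeconfig handling are mine, not the pod_ready.go implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a named pod in kube-system until Ready or timeout.
func waitForPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "etcd-pause-246701", 4*time.Minute))
}
```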
	I0917 18:10:59.029318   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.029807   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has current primary IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.029825   58866 main.go:141] libmachine: (force-systemd-flag-722424) Found IP for machine: 192.168.72.193
	I0917 18:10:59.029835   58866 main.go:141] libmachine: (force-systemd-flag-722424) Reserving static IP address...
	I0917 18:10:59.030251   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-722424", mac: "52:54:00:a0:eb:8c", ip: "192.168.72.193"} in network mk-force-systemd-flag-722424
	I0917 18:10:59.111691   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Getting to WaitForSSH function...
	I0917 18:10:59.111732   58866 main.go:141] libmachine: (force-systemd-flag-722424) Reserved static IP address: 192.168.72.193
	I0917 18:10:59.111745   58866 main.go:141] libmachine: (force-systemd-flag-722424) Waiting for SSH to be available...
	I0917 18:10:59.114177   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.114588   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.114616   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.114883   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using SSH client type: external
	I0917 18:10:59.114905   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa (-rw-------)
	I0917 18:10:59.114945   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:10:59.114962   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | About to run SSH command:
	I0917 18:10:59.114977   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | exit 0
	I0917 18:10:59.241617   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | SSH cmd err, output: <nil>: 
	I0917 18:10:59.241930   58866 main.go:141] libmachine: (force-systemd-flag-722424) KVM machine creation complete!
	I0917 18:10:59.242332   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetConfigRaw
	I0917 18:10:59.242938   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:10:59.243168   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:10:59.243364   58866 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:10:59.243383   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetState
	I0917 18:10:59.244879   58866 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:10:59.244895   58866 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:10:59.244903   58866 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:10:59.244911   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.247839   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.248264   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.248295   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.248439   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.248640   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.248817   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.248984   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.249244   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.249506   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.249524   58866 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:10:59.352609   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
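Running "exit 0" over SSH is the cheapest possible liveness probe: a nil error means sshd is up and key auth works. The sketch below shows that probe with golang.org/x/crypto/ssh; the address, username, and retry interval mirror the log, while the key path is a placeholder and the retry policy is my own, not libmachine's.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAlive runs "exit 0" on the target host; a nil error means sshd is up
// and the private key is accepted.
func sshAlive(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	for {
		if err := sshAlive("192.168.72.193:22", "docker", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // retry until the VM's sshd comes up
	}
}
```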
	I0917 18:10:59.352634   58866 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:10:59.352646   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.355295   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.355697   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.355739   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.355868   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.356041   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.356211   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.356315   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.356451   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.356636   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.356649   58866 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:10:59.458383   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:10:59.458488   58866 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:10:59.458504   58866 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:10:59.458512   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.458761   58866 buildroot.go:166] provisioning hostname "force-systemd-flag-722424"
	I0917 18:10:59.458783   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.458925   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.461411   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.461702   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.461725   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.461823   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.462010   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.462169   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.462314   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.462485   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.462694   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.462710   58866 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-722424 && echo "force-systemd-flag-722424" | sudo tee /etc/hostname
	I0917 18:10:59.577635   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-722424
	
	I0917 18:10:59.577665   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.580292   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.580629   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.580668   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.580900   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.581111   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.581282   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.581416   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.581561   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.581782   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.581810   58866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-722424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-722424/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-722424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:10:59.687087   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:10:59.687126   58866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:10:59.687184   58866 buildroot.go:174] setting up certificates
	I0917 18:10:59.687205   58866 provision.go:84] configureAuth start
	I0917 18:10:59.687224   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.687533   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:10:59.690331   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.690731   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.690753   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.690907   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.692953   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.693247   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.693273   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.693414   58866 provision.go:143] copyHostCerts
	I0917 18:10:59.693441   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:10:59.693501   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:10:59.693511   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:10:59.693564   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:10:59.693652   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:10:59.693670   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:10:59.693674   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:10:59.693699   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:10:59.693764   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:10:59.693780   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:10:59.693786   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:10:59.693802   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:10:59.693861   58866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-722424 san=[127.0.0.1 192.168.72.193 force-systemd-flag-722424 localhost minikube]
	I0917 18:10:59.867868   58866 provision.go:177] copyRemoteCerts
	I0917 18:10:59.867935   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:10:59.867963   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.870837   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.871204   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.871229   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.871414   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.871603   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.871782   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.871943   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:10:59.959908   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 18:10:59.959978   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:10:59.989707   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 18:10:59.989781   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0917 18:11:00.019023   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 18:11:00.019097   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:11:00.047639   58866 provision.go:87] duration metric: took 360.418667ms to configureAuth
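configureAuth above issues a server certificate whose SANs cover the VM's IP and hostname, signed by the local minikube CA, before pushing it to /etc/docker on the guest. The following is a self-contained crypto/x509 sketch of issuing such a cert; the SAN values are taken from the log, the throwaway CA and helper names are assumptions added so the example runs on its own.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// makeServerCert issues a server certificate signed by the given CA, with the
// IP and DNS SANs the remote daemon will be reached by.
func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-722424"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provisioning log: loopback, the VM IP, hostname aliases.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.193")},
		DNSNames:    []string{"force-systemd-flag-722424", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// In the real flow the CA comes from .minikube/certs/ca.pem and ca-key.pem;
	// here a throwaway self-signed CA keeps the sketch runnable end to end.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, keyPEM, err := makeServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", certPEM, 0o644)
	_ = os.WriteFile("server-key.pem", keyPEM, 0o600)
}
```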
	I0917 18:11:00.047665   58866 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:11:00.047840   58866 config.go:182] Loaded profile config "force-systemd-flag-722424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:11:00.047908   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.050491   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.050815   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.050853   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.051094   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.051275   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.051478   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.051651   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.051813   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:11:00.052014   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:11:00.052039   58866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:11:00.514537   59176 start.go:364] duration metric: took 10.472915323s to acquireMachinesLock for "NoKubernetes-267093"
	I0917 18:11:00.514596   59176 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-267093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-267093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:11:00.514741   59176 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:10:57.058649   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:10:59.560707   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:00.270726   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:11:00.270760   58866 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:11:00.270773   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetURL
	I0917 18:11:00.272238   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using libvirt version 6000000
	I0917 18:11:00.274897   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.275306   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.275337   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.275499   58866 main.go:141] libmachine: Docker is up and running!
	I0917 18:11:00.275512   58866 main.go:141] libmachine: Reticulating splines...
	I0917 18:11:00.275521   58866 client.go:171] duration metric: took 24.960078669s to LocalClient.Create
	I0917 18:11:00.275549   58866 start.go:167] duration metric: took 24.960138327s to libmachine.API.Create "force-systemd-flag-722424"
	I0917 18:11:00.275561   58866 start.go:293] postStartSetup for "force-systemd-flag-722424" (driver="kvm2")
	I0917 18:11:00.275575   58866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:11:00.275598   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.275866   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:11:00.275888   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.278397   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.278829   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.278849   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.279038   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.279225   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.279392   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.279510   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.359683   58866 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:11:00.364192   58866 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:11:00.364219   58866 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:11:00.364286   58866 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:11:00.364373   58866 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:11:00.364384   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 18:11:00.364465   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:11:00.374283   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:11:00.400120   58866 start.go:296] duration metric: took 124.543905ms for postStartSetup
	I0917 18:11:00.400177   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetConfigRaw
	I0917 18:11:00.400819   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:00.403665   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.403992   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.404016   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.404280   58866 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/config.json ...
	I0917 18:11:00.404492   58866 start.go:128] duration metric: took 25.107904017s to createHost
	I0917 18:11:00.404523   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.406875   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.407186   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.407214   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.407361   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.407581   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.407740   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.407947   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.408130   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:11:00.408300   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:11:00.408311   58866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:11:00.514344   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596660.489673957
	
	I0917 18:11:00.514373   58866 fix.go:216] guest clock: 1726596660.489673957
	I0917 18:11:00.514383   58866 fix.go:229] Guest: 2024-09-17 18:11:00.489673957 +0000 UTC Remote: 2024-09-17 18:11:00.404508 +0000 UTC m=+25.217860548 (delta=85.165957ms)
	I0917 18:11:00.514408   58866 fix.go:200] guest clock delta is within tolerance: 85.165957ms
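The clock comparison above runs `date +%s.%N` over SSH, parses the output as fractional seconds, and only flags the guest if the delta exceeds a tolerance. A small sketch of that check follows; the tolerance constant is an assumption, since the log only reports that 85ms was within it.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Output captured in the log above; normally this comes back over SSH.
	guest, err := parseGuestClock("1726596660.489673957")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, not a documented default
	if delta > tolerance {
		fmt.Printf("guest clock is off by %s; consider syncing time\n", delta)
	} else {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	}
}
```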
	I0917 18:11:00.514415   58866 start.go:83] releasing machines lock for "force-systemd-flag-722424", held for 25.217930432s
	I0917 18:11:00.514446   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.514716   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:00.517742   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.518135   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.518161   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.518373   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.518859   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.519028   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.519134   58866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:11:00.519191   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.519255   58866 ssh_runner.go:195] Run: cat /version.json
	I0917 18:11:00.519284   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.522007   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522268   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522390   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.522421   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522607   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.522724   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.522744   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522751   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.522879   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.522927   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.523022   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.523032   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.523162   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.523545   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.629684   58866 ssh_runner.go:195] Run: systemctl --version
	I0917 18:11:00.637344   58866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:11:00.804032   58866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:11:00.810730   58866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:11:00.810805   58866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:11:00.829608   58866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:11:00.829636   58866 start.go:495] detecting cgroup driver to use...
	I0917 18:11:00.829650   58866 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0917 18:11:00.829714   58866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:11:00.847702   58866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:11:00.863256   58866 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:11:00.863324   58866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:11:00.878545   58866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:11:00.893911   58866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:11:01.018538   58866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:11:01.165859   58866 docker.go:233] disabling docker service ...
	I0917 18:11:01.165923   58866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:11:01.180569   58866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:11:01.194536   58866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:11:01.337220   58866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:11:01.480781   58866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:11:01.495295   58866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:11:01.515646   58866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:11:01.515733   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.527288   58866 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 18:11:01.527368   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.539525   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.551259   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.563352   58866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:11:01.575357   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.587452   58866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.606193   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
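For reference, the sequence of sed edits above leaves the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch reconstructed from the commands in this log; the actual file on the VM keeps whatever other settings it already had):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]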
	I0917 18:11:01.618021   58866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:11:01.631380   58866 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:11:01.631449   58866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:11:01.648473   58866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
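The sysctl failure above is expected on a freshly booted VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the log follows up with modprobe and then turns on IPv4 forwarding. The same three steps, run by hand (illustrative only):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables    # the key is present once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"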
	I0917 18:11:01.662048   58866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:01.807832   58866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:11:01.904438   58866 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:11:01.904524   58866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:11:01.909628   58866 start.go:563] Will wait 60s for crictl version
	I0917 18:11:01.909691   58866 ssh_runner.go:195] Run: which crictl
	I0917 18:11:01.913695   58866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:11:01.973370   58866 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:11:01.973466   58866 ssh_runner.go:195] Run: crio --version
	I0917 18:11:02.010818   58866 ssh_runner.go:195] Run: crio --version
	I0917 18:11:02.043824   58866 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:11:00.517138   59176 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0917 18:11:00.517444   59176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:11:00.517478   59176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:11:00.538142   59176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0917 18:11:00.538656   59176 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:11:00.539269   59176 main.go:141] libmachine: Using API Version  1
	I0917 18:11:00.539307   59176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:11:00.539702   59176 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:11:00.539917   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .GetMachineName
	I0917 18:11:00.540093   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .DriverName
	I0917 18:11:00.540232   59176 start.go:159] libmachine.API.Create for "NoKubernetes-267093" (driver="kvm2")
	I0917 18:11:00.540256   59176 client.go:168] LocalClient.Create starting
	I0917 18:11:00.540293   59176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:11:00.540328   59176 main.go:141] libmachine: Decoding PEM data...
	I0917 18:11:00.540348   59176 main.go:141] libmachine: Parsing certificate...
	I0917 18:11:00.540423   59176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:11:00.540456   59176 main.go:141] libmachine: Decoding PEM data...
	I0917 18:11:00.540467   59176 main.go:141] libmachine: Parsing certificate...
	I0917 18:11:00.540519   59176 main.go:141] libmachine: Running pre-create checks...
	I0917 18:11:00.540527   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .PreCreateCheck
	I0917 18:11:00.540873   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .GetConfigRaw
	I0917 18:11:00.541407   59176 main.go:141] libmachine: Creating machine...
	I0917 18:11:00.541414   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .Create
	I0917 18:11:00.541549   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating KVM machine...
	I0917 18:11:00.542932   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | found existing default KVM network
	I0917 18:11:00.544457   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.544281   59262 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:8c:15} reservation:<nil>}
	I0917 18:11:00.545483   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.545388   59262 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:17:1a} reservation:<nil>}
	I0917 18:11:00.546921   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.546831   59262 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003051c0}
	I0917 18:11:00.546975   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | created network xml: 
	I0917 18:11:00.546998   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | <network>
	I0917 18:11:00.547004   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <name>mk-NoKubernetes-267093</name>
	I0917 18:11:00.547014   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <dns enable='no'/>
	I0917 18:11:00.547018   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   
	I0917 18:11:00.547030   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0917 18:11:00.547034   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |     <dhcp>
	I0917 18:11:00.547042   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0917 18:11:00.547046   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |     </dhcp>
	I0917 18:11:00.547049   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   </ip>
	I0917 18:11:00.547053   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   
	I0917 18:11:00.547056   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | </network>
	I0917 18:11:00.547061   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | 
	I0917 18:11:00.552614   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | trying to create private KVM network mk-NoKubernetes-267093 192.168.61.0/24...
	I0917 18:11:00.630789   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | private KVM network mk-NoKubernetes-267093 192.168.61.0/24 created
	I0917 18:11:00.630858   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 ...
	I0917 18:11:00.630881   59176 main.go:141] libmachine: (NoKubernetes-267093) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:11:00.631075   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.630905   59262 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:11:00.631102   59176 main.go:141] libmachine: (NoKubernetes-267093) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:11:00.881793   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.881672   59262 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/id_rsa...
	I0917 18:11:01.098716   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:01.098582   59262 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/NoKubernetes-267093.rawdisk...
	I0917 18:11:01.098729   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Writing magic tar header
	I0917 18:11:01.098741   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Writing SSH key tar header
	I0917 18:11:01.098747   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:01.098726   59262 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 ...
	I0917 18:11:01.098855   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093
	I0917 18:11:01.098899   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 (perms=drwx------)
	I0917 18:11:01.098918   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:11:01.098925   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:11:01.098939   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:11:01.098946   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:11:01.098954   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:11:01.098957   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:11:01.098964   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home
	I0917 18:11:01.098968   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Skipping /home - not owner
	I0917 18:11:01.098976   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:11:01.098981   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:11:01.099009   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:11:01.099025   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:11:01.099034   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating domain...
	I0917 18:11:01.100227   59176 main.go:141] libmachine: (NoKubernetes-267093) define libvirt domain using xml: 
	I0917 18:11:01.100250   59176 main.go:141] libmachine: (NoKubernetes-267093) <domain type='kvm'>
	I0917 18:11:01.100265   59176 main.go:141] libmachine: (NoKubernetes-267093)   <name>NoKubernetes-267093</name>
	I0917 18:11:01.100277   59176 main.go:141] libmachine: (NoKubernetes-267093)   <memory unit='MiB'>6000</memory>
	I0917 18:11:01.100300   59176 main.go:141] libmachine: (NoKubernetes-267093)   <vcpu>2</vcpu>
	I0917 18:11:01.100306   59176 main.go:141] libmachine: (NoKubernetes-267093)   <features>
	I0917 18:11:01.100313   59176 main.go:141] libmachine: (NoKubernetes-267093)     <acpi/>
	I0917 18:11:01.100327   59176 main.go:141] libmachine: (NoKubernetes-267093)     <apic/>
	I0917 18:11:01.100334   59176 main.go:141] libmachine: (NoKubernetes-267093)     <pae/>
	I0917 18:11:01.100343   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100350   59176 main.go:141] libmachine: (NoKubernetes-267093)   </features>
	I0917 18:11:01.100356   59176 main.go:141] libmachine: (NoKubernetes-267093)   <cpu mode='host-passthrough'>
	I0917 18:11:01.100363   59176 main.go:141] libmachine: (NoKubernetes-267093)   
	I0917 18:11:01.100368   59176 main.go:141] libmachine: (NoKubernetes-267093)   </cpu>
	I0917 18:11:01.100375   59176 main.go:141] libmachine: (NoKubernetes-267093)   <os>
	I0917 18:11:01.100382   59176 main.go:141] libmachine: (NoKubernetes-267093)     <type>hvm</type>
	I0917 18:11:01.100389   59176 main.go:141] libmachine: (NoKubernetes-267093)     <boot dev='cdrom'/>
	I0917 18:11:01.100396   59176 main.go:141] libmachine: (NoKubernetes-267093)     <boot dev='hd'/>
	I0917 18:11:01.100404   59176 main.go:141] libmachine: (NoKubernetes-267093)     <bootmenu enable='no'/>
	I0917 18:11:01.100415   59176 main.go:141] libmachine: (NoKubernetes-267093)   </os>
	I0917 18:11:01.100422   59176 main.go:141] libmachine: (NoKubernetes-267093)   <devices>
	I0917 18:11:01.100435   59176 main.go:141] libmachine: (NoKubernetes-267093)     <disk type='file' device='cdrom'>
	I0917 18:11:01.100449   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/boot2docker.iso'/>
	I0917 18:11:01.100455   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target dev='hdc' bus='scsi'/>
	I0917 18:11:01.100461   59176 main.go:141] libmachine: (NoKubernetes-267093)       <readonly/>
	I0917 18:11:01.100466   59176 main.go:141] libmachine: (NoKubernetes-267093)     </disk>
	I0917 18:11:01.100478   59176 main.go:141] libmachine: (NoKubernetes-267093)     <disk type='file' device='disk'>
	I0917 18:11:01.100485   59176 main.go:141] libmachine: (NoKubernetes-267093)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:11:01.100498   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/NoKubernetes-267093.rawdisk'/>
	I0917 18:11:01.100503   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target dev='hda' bus='virtio'/>
	I0917 18:11:01.100509   59176 main.go:141] libmachine: (NoKubernetes-267093)     </disk>
	I0917 18:11:01.100514   59176 main.go:141] libmachine: (NoKubernetes-267093)     <interface type='network'>
	I0917 18:11:01.100521   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source network='mk-NoKubernetes-267093'/>
	I0917 18:11:01.100527   59176 main.go:141] libmachine: (NoKubernetes-267093)       <model type='virtio'/>
	I0917 18:11:01.100533   59176 main.go:141] libmachine: (NoKubernetes-267093)     </interface>
	I0917 18:11:01.100537   59176 main.go:141] libmachine: (NoKubernetes-267093)     <interface type='network'>
	I0917 18:11:01.100549   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source network='default'/>
	I0917 18:11:01.100566   59176 main.go:141] libmachine: (NoKubernetes-267093)       <model type='virtio'/>
	I0917 18:11:01.100573   59176 main.go:141] libmachine: (NoKubernetes-267093)     </interface>
	I0917 18:11:01.100579   59176 main.go:141] libmachine: (NoKubernetes-267093)     <serial type='pty'>
	I0917 18:11:01.100585   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target port='0'/>
	I0917 18:11:01.100590   59176 main.go:141] libmachine: (NoKubernetes-267093)     </serial>
	I0917 18:11:01.100597   59176 main.go:141] libmachine: (NoKubernetes-267093)     <console type='pty'>
	I0917 18:11:01.100604   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target type='serial' port='0'/>
	I0917 18:11:01.100610   59176 main.go:141] libmachine: (NoKubernetes-267093)     </console>
	I0917 18:11:01.100615   59176 main.go:141] libmachine: (NoKubernetes-267093)     <rng model='virtio'>
	I0917 18:11:01.100629   59176 main.go:141] libmachine: (NoKubernetes-267093)       <backend model='random'>/dev/random</backend>
	I0917 18:11:01.100638   59176 main.go:141] libmachine: (NoKubernetes-267093)     </rng>
	I0917 18:11:01.100645   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100650   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100656   59176 main.go:141] libmachine: (NoKubernetes-267093)   </devices>
	I0917 18:11:01.100661   59176 main.go:141] libmachine: (NoKubernetes-267093) </domain>
	I0917 18:11:01.100672   59176 main.go:141] libmachine: (NoKubernetes-267093) 
	I0917 18:11:01.105137   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:b3:33:9b in network default
	I0917 18:11:01.105718   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring networks are active...
	I0917 18:11:01.105732   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:01.106367   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring network default is active
	I0917 18:11:01.106635   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring network mk-NoKubernetes-267093 is active
	I0917 18:11:01.107060   59176 main.go:141] libmachine: (NoKubernetes-267093) Getting domain xml...
	I0917 18:11:01.107723   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating domain...
	I0917 18:11:02.501726   59176 main.go:141] libmachine: (NoKubernetes-267093) Waiting to get IP...
	I0917 18:11:02.502778   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.503309   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.503329   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.503269   59262 retry.go:31] will retry after 194.277684ms: waiting for machine to come up
	I0917 18:11:02.699875   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.700401   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.700421   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.700368   59262 retry.go:31] will retry after 248.553852ms: waiting for machine to come up
	I0917 18:11:02.950852   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.951591   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.951632   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.951538   59262 retry.go:31] will retry after 352.682061ms: waiting for machine to come up
	I0917 18:11:03.306017   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:03.306635   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:03.306658   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:03.306588   59262 retry.go:31] will retry after 430.231323ms: waiting for machine to come up
	I0917 18:11:03.738275   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:03.738825   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:03.738845   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:03.738780   59262 retry.go:31] will retry after 535.135352ms: waiting for machine to come up
	I0917 18:11:04.275783   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:04.276348   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:04.276381   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:04.276301   59262 retry.go:31] will retry after 828.941966ms: waiting for machine to come up
	I0917 18:11:02.045041   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:02.048867   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:02.049487   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:02.049517   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:02.049781   58866 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:11:02.055016   58866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:11:02.068785   58866 kubeadm.go:883] updating cluster {Name:force-systemd-flag-722424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:11:02.068894   58866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:11:02.068933   58866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:11:02.102203   58866 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:11:02.102275   58866 ssh_runner.go:195] Run: which lz4
	I0917 18:11:02.106336   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0917 18:11:02.106416   58866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:11:02.110687   58866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:11:02.110719   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:11:03.628332   58866 crio.go:462] duration metric: took 1.521935538s to copy over tarball
	I0917 18:11:03.628414   58866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:11:02.058409   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:04.560652   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:05.059393   58375 pod_ready.go:93] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:05.059429   58375 pod_ready.go:82] duration metric: took 10.008598103s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:05.059441   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:05.853071   58866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.224620276s)
	I0917 18:11:05.853103   58866 crio.go:469] duration metric: took 2.224740933s to extract the tarball
	I0917 18:11:05.853112   58866 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:11:05.893906   58866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:11:05.946593   58866 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:11:05.946615   58866 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:11:05.946622   58866 kubeadm.go:934] updating node { 192.168.72.193 8443 v1.31.1 crio true true} ...
	I0917 18:11:05.946737   58866 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-722424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:11:05.946799   58866 ssh_runner.go:195] Run: crio config
	I0917 18:11:06.005144   58866 cni.go:84] Creating CNI manager for ""
	I0917 18:11:06.005164   58866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:11:06.005173   58866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:11:06.005197   58866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-722424 NodeName:force-systemd-flag-722424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.193 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:11:06.005385   58866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-722424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:11:06.005454   58866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:11:06.019438   58866 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:11:06.019517   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:11:06.032605   58866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0917 18:11:06.054178   58866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:11:06.073774   58866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0917 18:11:06.093697   58866 ssh_runner.go:195] Run: grep 192.168.72.193	control-plane.minikube.internal$ /etc/hosts
	I0917 18:11:06.097762   58866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:11:06.111777   58866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:06.229014   58866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:11:06.246644   58866 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424 for IP: 192.168.72.193
	I0917 18:11:06.246670   58866 certs.go:194] generating shared ca certs ...
	I0917 18:11:06.246698   58866 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.246887   58866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:11:06.246949   58866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:11:06.246963   58866 certs.go:256] generating profile certs ...
	I0917 18:11:06.247040   58866 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key
	I0917 18:11:06.247075   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt with IP's: []
	I0917 18:11:06.381827   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt ...
	I0917 18:11:06.381860   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt: {Name:mk86097a41c322095d29daf8e622b2b28a99e1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.382049   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key ...
	I0917 18:11:06.382070   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key: {Name:mkba77cf4011828199491b7a86203b715802cb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.382202   58866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66
	I0917 18:11:06.382228   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.193]
	I0917 18:11:06.495300   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 ...
	I0917 18:11:06.495331   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66: {Name:mk0f8ed889397e7c2c4cba36a22642380e555e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.495487   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66 ...
	I0917 18:11:06.495499   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66: {Name:mk36f61736ce6e8c93dff79e54f21136bf5676d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.495573   58866 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt
	I0917 18:11:06.495669   58866 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key
	I0917 18:11:06.495763   58866 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key
	I0917 18:11:06.495782   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt with IP's: []
	I0917 18:11:06.597612   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt ...
	I0917 18:11:06.597643   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt: {Name:mk19a59ef793535a031f97388f4542bcd4803fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.597810   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key ...
	I0917 18:11:06.597825   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key: {Name:mkc124e54c2c0ec78debdbff969152258da04920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
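minikube generates these profile certificates in Go (crypto.go) rather than by shelling out, but the client-certificate step is roughly equivalent to the openssl sequence below. This is an illustrative sketch: the organization system:masters is an assumption about the admin client cert and is not taken from this log; -days 1095 matches the CertExpiration:26280h0m0s value in the cluster config above.

	# key and CSR for the "minikube-user" client certificate
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	# sign it with the shared minikube CA that was reused a few lines earlier
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1095 -out client.crt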
	I0917 18:11:06.597897   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 18:11:06.597916   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 18:11:06.597931   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 18:11:06.597945   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 18:11:06.597958   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 18:11:06.597971   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 18:11:06.597983   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 18:11:06.597995   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 18:11:06.598053   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:11:06.598090   58866 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:11:06.598100   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:11:06.598124   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:11:06.598147   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:11:06.598167   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:11:06.598209   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:11:06.598234   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 18:11:06.598263   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 18:11:06.598276   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.598808   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:11:06.628688   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:11:06.655202   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:11:06.684871   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:11:06.714276   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 18:11:06.743755   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:11:06.774263   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:11:06.804691   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:11:06.834822   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:11:06.863587   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:11:06.891087   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:11:06.917696   58866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:11:06.937738   58866 ssh_runner.go:195] Run: openssl version
	I0917 18:11:06.944797   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:11:06.960105   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.966504   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.966562   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.973218   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:11:06.985107   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:11:07.001874   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.008460   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.008534   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.017107   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:11:07.036073   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:11:07.060162   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.066845   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.066913   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.077271   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:11:07.096066   58866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:11:07.101775   58866 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:11:07.101834   58866 kubeadm.go:392] StartCluster: {Name:force-systemd-flag-722424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:11:07.101924   58866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:11:07.101982   58866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:11:07.143064   58866 cri.go:89] found id: ""
	I0917 18:11:07.143141   58866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:11:07.154240   58866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:11:07.165029   58866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:11:07.175778   58866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:11:07.175801   58866 kubeadm.go:157] found existing configuration files:
	
	I0917 18:11:07.175866   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:11:07.185744   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:11:07.185823   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:11:07.196612   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:11:07.206772   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:11:07.206840   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:11:07.217977   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:11:07.227882   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:11:07.227949   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:11:07.238655   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:11:07.248418   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:11:07.248493   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:11:07.258861   58866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:11:07.377104   58866 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:11:07.377265   58866 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:11:07.499737   58866 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:11:07.499901   58866 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:11:07.500039   58866 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:11:07.511785   58866 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:11:07.564894   58866 out.go:235]   - Generating certificates and keys ...
	I0917 18:11:07.565071   58866 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:11:07.565180   58866 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:11:07.586663   58866 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:11:07.781422   58866 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:11:07.921986   58866 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:11:08.265062   58866 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:11:08.315340   58866 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:11:08.315529   58866 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-722424 localhost] and IPs [192.168.72.193 127.0.0.1 ::1]
	I0917 18:11:08.416208   58866 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:11:08.416434   58866 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-722424 localhost] and IPs [192.168.72.193 127.0.0.1 ::1]
	I0917 18:11:08.585541   58866 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:11:08.679195   58866 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:11:08.743216   58866 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:11:08.743542   58866 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:11:08.853752   58866 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:11:08.932756   58866 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:11:09.177843   58866 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:11:09.232425   58866 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:11:09.447110   58866 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:11:09.447679   58866 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:11:09.451184   58866 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:11:05.107465   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:05.107971   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:05.107996   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:05.107933   59262 retry.go:31] will retry after 1.037870312s: waiting for machine to come up
	I0917 18:11:06.147244   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:06.147951   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:06.147986   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:06.147926   59262 retry.go:31] will retry after 1.084729336s: waiting for machine to come up
	I0917 18:11:07.233974   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:07.234447   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:07.234467   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:07.234381   59262 retry.go:31] will retry after 1.81974704s: waiting for machine to come up
	I0917 18:11:09.055788   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:09.056264   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:09.056280   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:09.056205   59262 retry.go:31] will retry after 1.630185156s: waiting for machine to come up
	I0917 18:11:07.068277   58375 pod_ready.go:103] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:09.569066   58375 pod_ready.go:103] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:10.069777   58375 pod_ready.go:93] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.069810   58375 pod_ready.go:82] duration metric: took 5.010359365s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.069831   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.076290   58375 pod_ready.go:93] pod "kube-controller-manager-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.076316   58375 pod_ready.go:82] duration metric: took 6.475825ms for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.076327   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.081977   58375 pod_ready.go:93] pod "kube-proxy-vxgcn" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.082006   58375 pod_ready.go:82] duration metric: took 5.671891ms for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.082019   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.087808   58375 pod_ready.go:93] pod "kube-scheduler-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.087831   58375 pod_ready.go:82] duration metric: took 5.804937ms for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.087838   58375 pod_ready.go:39] duration metric: took 15.046091552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:11:10.087853   58375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:11:10.107302   58375 ops.go:34] apiserver oom_adj: -16
	I0917 18:11:10.107328   58375 kubeadm.go:597] duration metric: took 42.159106129s to restartPrimaryControlPlane
	I0917 18:11:10.107343   58375 kubeadm.go:394] duration metric: took 42.535162008s to StartCluster
	I0917 18:11:10.107364   58375 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:10.107442   58375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:11:10.108104   58375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:10.108359   58375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:11:10.108436   58375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:11:10.108725   58375 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:11:10.111365   58375 out.go:177] * Enabled addons: 
	I0917 18:11:10.111385   58375 out.go:177] * Verifying Kubernetes components...
	I0917 18:11:09.453247   58866 out.go:235]   - Booting up control plane ...
	I0917 18:11:09.453378   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:11:09.453508   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:11:09.453615   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:11:09.471050   58866 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:11:09.478679   58866 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:11:09.478776   58866 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:11:09.627956   58866 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:11:09.628115   58866 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:11:10.129019   58866 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.637536ms
	I0917 18:11:10.129161   58866 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:11:10.112960   58375 addons.go:510] duration metric: took 4.532285ms for enable addons: enabled=[]
	I0917 18:11:10.112994   58375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:10.279460   58375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:11:10.302491   58375 node_ready.go:35] waiting up to 6m0s for node "pause-246701" to be "Ready" ...
	I0917 18:11:10.306062   58375 node_ready.go:49] node "pause-246701" has status "Ready":"True"
	I0917 18:11:10.306086   58375 node_ready.go:38] duration metric: took 3.560004ms for node "pause-246701" to be "Ready" ...
	I0917 18:11:10.306093   58375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:11:10.311355   58375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.464444   58375 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.464488   58375 pod_ready.go:82] duration metric: took 153.104082ms for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.464501   58375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.864824   58375 pod_ready.go:93] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.864849   58375 pod_ready.go:82] duration metric: took 400.340534ms for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.864858   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.264832   58375 pod_ready.go:93] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:11.264858   58375 pod_ready.go:82] duration metric: took 399.993599ms for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.264867   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.665494   58375 pod_ready.go:93] pod "kube-controller-manager-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:11.665588   58375 pod_ready.go:82] duration metric: took 400.710991ms for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.665615   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.065539   58375 pod_ready.go:93] pod "kube-proxy-vxgcn" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:12.065575   58375 pod_ready.go:82] duration metric: took 399.941304ms for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.065589   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.465103   58375 pod_ready.go:93] pod "kube-scheduler-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:12.465135   58375 pod_ready.go:82] duration metric: took 399.53748ms for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.465146   58375 pod_ready.go:39] duration metric: took 2.159042692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:11:12.465163   58375 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:11:12.465242   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:11:12.487151   58375 api_server.go:72] duration metric: took 2.378745999s to wait for apiserver process to appear ...
	I0917 18:11:12.487187   58375 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:11:12.487234   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:11:12.494137   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I0917 18:11:12.495390   58375 api_server.go:141] control plane version: v1.31.1
	I0917 18:11:12.495414   58375 api_server.go:131] duration metric: took 8.218787ms to wait for apiserver health ...
	I0917 18:11:12.495422   58375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:11:12.668374   58375 system_pods.go:59] 6 kube-system pods found
	I0917 18:11:12.668428   58375 system_pods.go:61] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:11:12.668435   58375 system_pods.go:61] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running
	I0917 18:11:12.668442   58375 system_pods.go:61] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running
	I0917 18:11:12.668448   58375 system_pods.go:61] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running
	I0917 18:11:12.668454   58375 system_pods.go:61] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:11:12.668459   58375 system_pods.go:61] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running
	I0917 18:11:12.668467   58375 system_pods.go:74] duration metric: took 173.038167ms to wait for pod list to return data ...
	I0917 18:11:12.668476   58375 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:11:12.865188   58375 default_sa.go:45] found service account: "default"
	I0917 18:11:12.865223   58375 default_sa.go:55] duration metric: took 196.738943ms for default service account to be created ...
	I0917 18:11:12.865248   58375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:11:13.067058   58375 system_pods.go:86] 6 kube-system pods found
	I0917 18:11:13.067097   58375 system_pods.go:89] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:11:13.067105   58375 system_pods.go:89] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running
	I0917 18:11:13.067111   58375 system_pods.go:89] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running
	I0917 18:11:13.067117   58375 system_pods.go:89] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running
	I0917 18:11:13.067123   58375 system_pods.go:89] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:11:13.067128   58375 system_pods.go:89] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running
	I0917 18:11:13.067137   58375 system_pods.go:126] duration metric: took 201.881436ms to wait for k8s-apps to be running ...
	I0917 18:11:13.067146   58375 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:11:13.067202   58375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:11:13.083149   58375 system_svc.go:56] duration metric: took 15.991297ms WaitForService to wait for kubelet
	I0917 18:11:13.083184   58375 kubeadm.go:582] duration metric: took 2.974789811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:11:13.083209   58375 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:11:13.264118   58375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:11:13.264143   58375 node_conditions.go:123] node cpu capacity is 2
	I0917 18:11:13.264154   58375 node_conditions.go:105] duration metric: took 180.939596ms to run NodePressure ...
	I0917 18:11:13.264164   58375 start.go:241] waiting for startup goroutines ...
	I0917 18:11:13.264170   58375 start.go:246] waiting for cluster config update ...
	I0917 18:11:13.264177   58375 start.go:255] writing updated cluster config ...
	I0917 18:11:13.264455   58375 ssh_runner.go:195] Run: rm -f paused
	I0917 18:11:13.321760   58375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:11:13.324003   58375 out.go:177] * Done! kubectl is now configured to use "pause-246701" cluster and "default" namespace by default
	I0917 18:11:10.843277   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:11:10.843514   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	
	==> CRI-O <==
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.021797267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596674021758843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=324a80f5-9dbb-4e3a-ba26-10d7e3efe082 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.022842851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b0fd9b7-2b91-4d78-93c0-511dfcb606c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.022917344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b0fd9b7-2b91-4d78-93c0-511dfcb606c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.023248913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b0fd9b7-2b91-4d78-93c0-511dfcb606c4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.079575162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96f68a96-e7dd-4176-ac46-c7c8ac201c2e name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.079677466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96f68a96-e7dd-4176-ac46-c7c8ac201c2e name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.081190777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1861bb8f-baed-4d68-bc1e-7dad794b2752 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.081594940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596674081570317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1861bb8f-baed-4d68-bc1e-7dad794b2752 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.082305252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=413302ea-8e66-43c8-837f-71fc8bdf13a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.082384059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=413302ea-8e66-43c8-837f-71fc8bdf13a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.082655609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=413302ea-8e66-43c8-837f-71fc8bdf13a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.134659552Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=310d0e93-be96-401f-850a-f9db4cfb1ea1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.134763757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=310d0e93-be96-401f-850a-f9db4cfb1ea1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.136439709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b48fc8d-9ba1-4883-ab11-7c67ae7d9543 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.136847567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596674136820462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b48fc8d-9ba1-4883-ab11-7c67ae7d9543 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.137634468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaffa8c3-2958-494b-9dec-099f4ea92c91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.137712015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaffa8c3-2958-494b-9dec-099f4ea92c91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.137973530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaffa8c3-2958-494b-9dec-099f4ea92c91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.182577398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4589ac79-7231-467c-b24d-4b4f59cb4e75 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.182653795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4589ac79-7231-467c-b24d-4b4f59cb4e75 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.187442993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f5065f8-7d37-40d9-bbe1-c56ad77feab3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.187820557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596674187791666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f5065f8-7d37-40d9-bbe1-c56ad77feab3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.189082300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e7e0029-2227-4d65-bab2-5ff244457a7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.189189842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e7e0029-2227-4d65-bab2-5ff244457a7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:14 pause-246701 crio[2291]: time="2024-09-17 18:11:14.189484808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e7e0029-2227-4d65-bab2-5ff244457a7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	522b430ef1b39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago       Running             kube-controller-manager   2                   f689143900b53       kube-controller-manager-pause-246701
	e36ed07cad23c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago       Running             kube-scheduler            2                   c1dbcf329a3fe       kube-scheduler-pause-246701
	c83fe770d5129       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago       Running             kube-apiserver            2                   8b5b8a054b660       kube-apiserver-pause-246701
	7e941d039eb66       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago       Running             etcd                      2                   9cd865f7d89ed       etcd-pause-246701
	f5b65e38703d6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago       Running             coredns                   1                   21d37cdab9a84       coredns-7c65d6cfc9-dkldh
	1ee133b83897f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   46 seconds ago       Running             kube-proxy                1                   dfc9b352c2c27       kube-proxy-vxgcn
	59be282d82c99       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   46 seconds ago       Exited              kube-controller-manager   1                   f689143900b53       kube-controller-manager-pause-246701
	7eb34f9ac3911       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   46 seconds ago       Exited              etcd                      1                   9cd865f7d89ed       etcd-pause-246701
	01807adcb2c64       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   46 seconds ago       Exited              kube-scheduler            1                   c1dbcf329a3fe       kube-scheduler-pause-246701
	1f0edc99624aa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   47 seconds ago       Exited              kube-apiserver            1                   8b5b8a054b660       kube-apiserver-pause-246701
	f55176a7c1e4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   379e5671aca9d       coredns-7c65d6cfc9-dkldh
	fde175b4dcf39       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   5e3ccc0fc9418       kube-proxy-vxgcn
	
	
	==> coredns [f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1129796489]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.825) (total time: 30002ms):
	Trace[1129796489]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:09:51.826)
	Trace[1129796489]: [30.002034726s] [30.002034726s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1010701865]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.826) (total time: 30001ms):
	Trace[1010701865]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:09:51.827)
	Trace[1010701865]: [30.001009519s] [30.001009519s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1927893987]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.824) (total time: 30003ms):
	Trace[1927893987]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:09:51.827)
	Trace[1927893987]: [30.003262181s] [30.003262181s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5] <==
	Trace[703433045]: [10.003919345s] [10.003919345s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1898540675]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:28.566) (total time: 10001ms):
	Trace[1898540675]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:10:38.567)
	Trace[1898540675]: [10.001201173s] [10.001201173s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1323431054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:28.567) (total time: 10000ms):
	Trace[1323431054]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:10:38.568)
	Trace[1323431054]: [10.000517482s] [10.000517482s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1870039339]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:39.454) (total time: 10103ms):
	Trace[1870039339]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer 10103ms (18:10:49.557)
	Trace[1870039339]: [10.103351813s] [10.103351813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1983569036]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:39.423) (total time: 10134ms):
	Trace[1983569036]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer 10134ms (18:10:49.557)
	Trace[1983569036]: [10.134446713s] [10.134446713s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer
	
	
	==> describe nodes <==
	Name:               pause-246701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-246701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=pause-246701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_09_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:09:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-246701
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:11:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    pause-246701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad44cfe662c34575a4f4f46d3c15e2fa
	  System UUID:                ad44cfe6-62c3-4575-a4f4-f46d3c15e2fa
	  Boot ID:                    1d0abb0b-1c4d-4c00-9169-956572385348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dkldh                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     114s
	  kube-system                 etcd-pause-246701                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-246701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-246701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-vxgcn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-pause-246701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     119s               kubelet          Node pause-246701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node pause-246701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node pause-246701 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  NodeReady                118s               kubelet          Node pause-246701 status is now: NodeReady
	  Normal  RegisteredNode           116s               node-controller  Node pause-246701 event: Registered Node pause-246701 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-246701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-246701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-246701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-246701 event: Registered Node pause-246701 in Controller
	
	
	==> dmesg <==
	[  +0.059171] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067362] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.221608] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.141388] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.335812] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Sep17 18:09] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.078350] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.715520] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.513517] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.424056] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.082611] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.983147] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.205066] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.707366] kauditd_printk_skb: 88 callbacks suppressed
	[Sep17 18:10] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.173721] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
	[  +0.195317] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[  +0.142634] systemd-fstab-generator[2254]: Ignoring "noauto" option for root device
	[  +0.339461] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +7.609088] systemd-fstab-generator[2404]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.690131] kauditd_printk_skb: 87 callbacks suppressed
	[ +11.012397] systemd-fstab-generator[3212]: Ignoring "noauto" option for root device
	[  +3.623387] kauditd_printk_skb: 37 callbacks suppressed
	[Sep17 18:11] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	
	
	==> etcd [7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47] <==
	{"level":"info","ts":"2024-09-17T18:10:47.462568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=(2366053629920448428)"}
	{"level":"info","ts":"2024-09-17T18:10:47.462654Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","added-peer-id":"20d5e93d92ee8fac","added-peer-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2024-09-17T18:10:47.462868Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:47.462967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:47.465243Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:10:47.465569Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"20d5e93d92ee8fac","initial-advertise-peer-urls":["https://192.168.39.167:2380"],"listen-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:10:47.465635Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:10:47.465702Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:47.465744Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:48.446856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.446966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.447022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgPreVoteResp from 20d5e93d92ee8fac at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.447069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgVoteResp from 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became leader at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20d5e93d92ee8fac elected leader 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.451753Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"20d5e93d92ee8fac","local-member-attributes":"{Name:pause-246701 ClientURLs:[https://192.168.39.167:2379]}","request-path":"/0/members/20d5e93d92ee8fac/attributes","cluster-id":"31f708155da0e645","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:10:48.452027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:10:48.452064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:10:48.453267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:10:48.453310Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:10:48.453571Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:48.453861Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:48.454517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.167:2379"}
	{"level":"info","ts":"2024-09-17T18:10:48.454752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31] <==
	{"level":"info","ts":"2024-09-17T18:10:27.960909Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-17T18:10:27.983924Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","commit-index":424}
	{"level":"info","ts":"2024-09-17T18:10:27.993395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-17T18:10:27.993571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became follower at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:27.993677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 20d5e93d92ee8fac [peers: [], term: 2, commit: 424, applied: 0, lastindex: 424, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-17T18:10:28.000383Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-17T18:10:28.063701Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":403}
	{"level":"info","ts":"2024-09-17T18:10:28.079580Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-17T18:10:28.098610Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"20d5e93d92ee8fac","timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:10:28.102478Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"20d5e93d92ee8fac"}
	{"level":"info","ts":"2024-09-17T18:10:28.104197Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"20d5e93d92ee8fac","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-17T18:10:28.104829Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:28.106686Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-17T18:10:28.107348Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.125685Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.129500Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.111248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=(2366053629920448428)"}
	{"level":"info","ts":"2024-09-17T18:10:28.134842Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","added-peer-id":"20d5e93d92ee8fac","added-peer-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2024-09-17T18:10:28.137318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:28.139208Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:28.147316Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:10:28.154477Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"20d5e93d92ee8fac","initial-advertise-peer-urls":["https://192.168.39.167:2380"],"listen-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:10:28.154591Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:10:28.154724Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:28.154759Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.167:2380"}
	
	
	==> kernel <==
	 18:11:14 up 2 min,  0 users,  load average: 1.33, 0.56, 0.21
	Linux pause-246701 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0] <==
	I0917 18:10:27.783234       1 server.go:142] Version: v1.31.1
	I0917 18:10:27.783271       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:28.544764       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0917 18:10:28.545003       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:28.545095       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0917 18:10:28.568369       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 18:10:28.568421       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 18:10:28.568762       1 instance.go:232] Using reconciler: lease
	I0917 18:10:28.569440       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0917 18:10:28.570389       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.545802       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.545811       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.571738       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:30.838256       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:31.221899       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:31.240591       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:32.974643       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:33.337422       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:33.484252       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:37.481096       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:37.993103       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:38.343232       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:43.609074       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:44.880502       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:45.571901       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953] <==
	I0917 18:10:53.208736       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 18:10:53.209018       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 18:10:53.209101       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 18:10:53.219349       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 18:10:53.228210       1 aggregator.go:171] initial CRD sync complete...
	I0917 18:10:53.228290       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 18:10:53.228315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 18:10:53.228340       1 cache.go:39] Caches are synced for autoregister controller
	I0917 18:10:53.235724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 18:10:53.235768       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 18:10:53.255866       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 18:10:53.275385       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 18:10:53.275446       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 18:10:53.276872       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 18:10:53.276937       1 policy_source.go:224] refreshing policies
	E0917 18:10:53.279540       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 18:10:53.356436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 18:10:54.078970       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 18:10:54.828087       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 18:10:54.845600       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 18:10:54.896452       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 18:10:54.930459       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 18:10:54.938507       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 18:10:56.827828       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 18:10:56.928056       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419] <==
	I0917 18:10:56.636460       1 shared_informer.go:320] Caches are synced for PV protection
	I0917 18:10:56.638659       1 shared_informer.go:320] Caches are synced for ephemeral
	I0917 18:10:56.639864       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 18:10:56.640330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="352.904µs"
	I0917 18:10:56.658833       1 shared_informer.go:320] Caches are synced for node
	I0917 18:10:56.658903       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0917 18:10:56.658922       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 18:10:56.658927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0917 18:10:56.658931       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0917 18:10:56.659019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-246701"
	I0917 18:10:56.663946       1 shared_informer.go:320] Caches are synced for endpoint
	I0917 18:10:56.674438       1 shared_informer.go:320] Caches are synced for disruption
	I0917 18:10:56.678231       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 18:10:56.775229       1 shared_informer.go:320] Caches are synced for HPA
	I0917 18:10:56.825228       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 18:10:56.829955       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:10:56.841676       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:10:56.869225       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 18:10:57.275400       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:10:57.323628       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:10:57.323673       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 18:10:59.175983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.90258ms"
	I0917 18:10:59.176102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.145µs"
	I0917 18:10:59.225993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.249192ms"
	I0917 18:10:59.226103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.066µs"
	
	
	==> kube-controller-manager [59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e] <==
	
	
	==> kube-proxy [1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf] <==
	 >
	E0917 18:10:28.701933       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:10:38.709824       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-246701\": net/http: TLS handshake timeout"
	E0917 18:10:49.558720       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-246701\": dial tcp 192.168.39.167:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.167:43038->192.168.39.167:8443: read: connection reset by peer"
	I0917 18:10:53.278767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	E0917 18:10:53.279014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:10:53.347704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:10:53.347835       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:10:53.347860       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:10:53.351635       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:10:53.352018       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:10:53.352048       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:53.353461       1 config.go:199] "Starting service config controller"
	I0917 18:10:53.353507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:10:53.353544       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:10:53.353564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:10:53.354351       1 config.go:328] "Starting node config controller"
	I0917 18:10:53.354378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:10:53.454090       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:10:53.454102       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:10:53.454607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:09:21.878467       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:09:21.894270       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	E0917 18:09:21.894504       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:09:21.937592       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:09:21.937673       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:09:21.937710       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:09:21.940665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:09:21.941014       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:09:21.941044       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:09:21.947478       1 config.go:199] "Starting service config controller"
	I0917 18:09:21.947513       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:09:21.947940       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:09:21.947975       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:09:21.948007       1 config.go:328] "Starting node config controller"
	I0917 18:09:21.948013       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:09:22.048413       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:09:22.048537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:09:22.048795       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b] <==
	I0917 18:10:29.117060       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556] <==
	I0917 18:10:52.246819       1 serving.go:386] Generated self-signed cert in-memory
	W0917 18:10:53.141026       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 18:10:53.141115       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 18:10:53.141125       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 18:10:53.141204       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 18:10:53.274374       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 18:10:53.274421       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:53.282317       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 18:10:53.284830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 18:10:53.285238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 18:10:53.284857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 18:10:53.385758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390532    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b4c7e73ea292afa0be4ff4b4b13e840-ca-certs\") pod \"kube-apiserver-pause-246701\" (UID: \"3b4c7e73ea292afa0be4ff4b4b13e840\") " pod="kube-system/kube-apiserver-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390551    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9be2266f3aff2f758a3717d99f55ddc-ca-certs\") pod \"kube-controller-manager-pause-246701\" (UID: \"a9be2266f3aff2f758a3717d99f55ddc\") " pod="kube-system/kube-controller-manager-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390566    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9be2266f3aff2f758a3717d99f55ddc-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-246701\" (UID: \"a9be2266f3aff2f758a3717d99f55ddc\") " pod="kube-system/kube-controller-manager-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390582    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ddff28aa8ccc0582f50b01e72762fef-kubeconfig\") pod \"kube-scheduler-pause-246701\" (UID: \"9ddff28aa8ccc0582f50b01e72762fef\") " pod="kube-system/kube-scheduler-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.391047    3219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-246701?timeout=10s\": dial tcp 192.168.39.167:8443: connect: connection refused" interval="400ms"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.526905    3219 kubelet_node_status.go:72] "Attempting to register node" node="pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.527724    3219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.167:8443: connect: connection refused" node="pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.616458    3219 scope.go:117] "RemoveContainer" containerID="1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.619280    3219 scope.go:117] "RemoveContainer" containerID="01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.619663    3219 scope.go:117] "RemoveContainer" containerID="59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.792405    3219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-246701?timeout=10s\": dial tcp 192.168.39.167:8443: connect: connection refused" interval="800ms"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.929865    3219 kubelet_node_status.go:72] "Attempting to register node" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327088    3219 kubelet_node_status.go:111] "Node was previously registered" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327317    3219 kubelet_node_status.go:75] "Successfully registered node" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327345    3219 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.328775    3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: E0917 18:10:53.338030    3219 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-246701\" already exists" pod="kube-system/kube-apiserver-pause-246701"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.114246    3219 apiserver.go:52] "Watching apiserver"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.160047    3219 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.173023    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de638753-d03b-438c-8d1f-d43af2bbcce4-lib-modules\") pod \"kube-proxy-vxgcn\" (UID: \"de638753-d03b-438c-8d1f-d43af2bbcce4\") " pod="kube-system/kube-proxy-vxgcn"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.173241    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de638753-d03b-438c-8d1f-d43af2bbcce4-xtables-lock\") pod \"kube-proxy-vxgcn\" (UID: \"de638753-d03b-438c-8d1f-d43af2bbcce4\") " pod="kube-system/kube-proxy-vxgcn"
	Sep 17 18:11:00 pause-246701 kubelet[3219]: E0917 18:11:00.226252    3219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596660225817779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:00 pause-246701 kubelet[3219]: E0917 18:11:00.226637    3219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596660225817779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:10 pause-246701 kubelet[3219]: E0917 18:11:10.229495    3219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596670229019264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:10 pause-246701 kubelet[3219]: E0917 18:11:10.229530    3219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596670229019264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-246701 -n pause-246701
helpers_test.go:261: (dbg) Run:  kubectl --context pause-246701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-246701 -n pause-246701
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-246701 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-246701 logs -n 25: (1.482707323s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo cat              | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo cat              | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo                  | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo find             | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-639892 sudo crio             | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-639892                       | cilium-639892             | jenkins | v1.34.0 | 17 Sep 24 18:07 UTC | 17 Sep 24 18:07 UTC |
	| start   | -p running-upgrade-271344              | minikube                  | jenkins | v1.26.0 | 17 Sep 24 18:07 UTC | 17 Sep 24 18:09 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-624774                 | offline-crio-624774       | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:08 UTC |
	| start   | -p pause-246701 --memory=2048          | pause-246701              | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:09 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-775296 stop            | minikube                  | jenkins | v1.26.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:08 UTC |
	| start   | -p stopped-upgrade-775296              | stopped-upgrade-775296    | jenkins | v1.34.0 | 17 Sep 24 18:08 UTC | 17 Sep 24 18:09 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-271344              | running-upgrade-271344    | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:10 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-775296              | stopped-upgrade-775296    | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:09 UTC |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC |                     |
	|         | --no-kubernetes                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20              |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:10 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-246701                        | pause-246701              | jenkins | v1.34.0 | 17 Sep 24 18:09 UTC | 17 Sep 24 18:11 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-271344              | running-upgrade-271344    | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	| start   | -p force-systemd-flag-722424           | force-systemd-flag-722424 | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC |                     |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC | 17 Sep 24 18:10 UTC |
	| start   | -p NoKubernetes-267093                 | NoKubernetes-267093       | jenkins | v1.34.0 | 17 Sep 24 18:10 UTC |                     |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:10:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:10:49.934878   59176 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:10:49.934987   59176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:10:49.934991   59176 out.go:358] Setting ErrFile to fd 2...
	I0917 18:10:49.934994   59176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:10:49.935208   59176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:10:49.935800   59176 out.go:352] Setting JSON to false
	I0917 18:10:49.936806   59176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6765,"bootTime":1726589885,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:10:49.936889   59176 start.go:139] virtualization: kvm guest
	I0917 18:10:49.939113   59176 out.go:177] * [NoKubernetes-267093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:10:49.940553   59176 notify.go:220] Checking for updates...
	I0917 18:10:49.940573   59176 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:10:49.941896   59176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:10:49.943246   59176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:10:49.944487   59176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:10:49.945831   59176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:10:49.947200   59176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:10:49.948939   59176 config.go:182] Loaded profile config "force-systemd-flag-722424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:10:49.949023   59176 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:10:49.949128   59176 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:10:49.949142   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:49.949210   59176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:10:49.986626   59176 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:10:49.987835   59176 start.go:297] selected driver: kvm2
	I0917 18:10:49.987841   59176 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:10:49.987851   59176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:10:49.988109   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:49.988171   59176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:10:49.988235   59176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:10:50.005310   59176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:10:50.005356   59176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 18:10:50.005889   59176 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0917 18:10:50.006050   59176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 18:10:50.006074   59176 cni.go:84] Creating CNI manager for ""
	I0917 18:10:50.006116   59176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:10:50.006120   59176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 18:10:50.006133   59176 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0917 18:10:50.006188   59176 start.go:340] cluster config:
	{Name:NoKubernetes-267093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-267093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:10:50.006278   59176 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:10:50.009059   59176 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-267093
	I0917 18:10:45.291426   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:45.291906   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:45.291929   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:45.291858   58889 retry.go:31] will retry after 2.013760418s: waiting for machine to come up
	I0917 18:10:47.307834   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:47.308379   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:47.308402   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:47.308326   58889 retry.go:31] will retry after 2.972611801s: waiting for machine to come up
	I0917 18:10:48.656150   58375 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e 7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31 01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b 1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0 f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 5f053a0840c2ed6a41dcc4d34fbfc23e10e3c96d8e1f854a5b7fadc23ca47845 33d59acb7ffaf293520cce65a3eecb173df48b67b954734f142eb2ed3b8849e8 80b367d092435a1351c64a2b2e3bf5cfba7bd47e66ff41495623fc364dc67db0: (20.582307034s)
	W0917 18:10:48.656244   58375 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e 7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31 01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b 1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0 f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 5f053a0840c2ed6a41dcc4d34fbfc23e10e3c96d8e1f854a5b7fadc23ca47845 33d59acb7ffaf293520cce65a3eecb173df48b67b954734f142eb2ed3b8849e8 80b367d092435a1351c64a2b2e3bf5cfba7bd47e66ff41495623fc364dc67db0: Process exited with status 1
	stdout:
	59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e
	7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31
	01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b
	1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0
	f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec
	fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487
	
	stderr:
	E0917 18:10:48.608902    2924 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": container with ID starting with 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 not found: ID does not exist" containerID="4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65"
	time="2024-09-17T18:10:48Z" level=fatal msg="stopping the container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": rpc error: code = NotFound desc = could not find container \"4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65\": container with ID starting with 4716e05f970f788777a6108ea9b4750ee64a9b7a5033ae95a93e5dd0216dbf65 not found: ID does not exist"
	I0917 18:10:48.656304   58375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:10:48.707261   58375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:10:48.722120   58375 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 17 18:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Sep 17 18:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 17 18:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Sep 17 18:09 /etc/kubernetes/scheduler.conf
	
	I0917 18:10:48.722186   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:10:48.734681   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:10:48.746535   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:10:48.756789   58375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:10:48.756855   58375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:10:48.768997   58375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:10:48.778865   58375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:10:48.778932   58375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:10:48.790532   58375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:10:48.802606   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:48.864369   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:49.780795   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.038491   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.101404   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:50.197263   58375 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:10:50.197353   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:50.698081   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:50.844416   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:10:50.844723   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:10:51.198506   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:10:51.214842   58375 api_server.go:72] duration metric: took 1.017580439s to wait for apiserver process to appear ...
	I0917 18:10:51.214870   58375 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:10:51.214894   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.177251   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:10:53.177287   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:10:53.177305   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.243990   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:10:53.244024   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:10:53.244043   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.290941   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:53.290968   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:53.715440   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:53.721648   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:53.721679   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:54.215212   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:54.222108   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:10:54.222141   58375 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:10:54.715756   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:10:54.720122   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I0917 18:10:54.726632   58375 api_server.go:141] control plane version: v1.31.1
	I0917 18:10:54.726657   58375 api_server.go:131] duration metric: took 3.511779718s to wait for apiserver health ...
	I0917 18:10:54.726664   58375 cni.go:84] Creating CNI manager for ""
	I0917 18:10:54.726670   58375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:10:54.728481   58375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
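For context, the long healthz dump above comes from api_server.go repeatedly querying https://192.168.39.167:8443/healthz and retrying while the endpoint still returns 500. The snippet below is a minimal, illustrative Go sketch of that kind of poll-until-200 loop; it is not minikube's implementation, and the interval, timeout, and TLS handling shown here are assumptions made for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// TLS verification is skipped only because this illustrative client does not
// trust the apiserver's self-signed certificate.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	// Endpoint taken from the log above; interval and timeout are assumed values.
	if err := waitForHealthz("https://192.168.39.167:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}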
	I0917 18:10:50.010337   59176 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0917 18:10:50.041128   59176 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0917 18:10:50.041323   59176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/NoKubernetes-267093/config.json ...
	I0917 18:10:50.041360   59176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/NoKubernetes-267093/config.json: {Name:mk55306f1bdb213fa8ea7b864c2a5c0cfe508335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:10:50.041557   59176 start.go:360] acquireMachinesLock for NoKubernetes-267093: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:10:50.282278   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:50.282783   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:50.282815   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:50.282735   58889 retry.go:31] will retry after 3.159205139s: waiting for machine to come up
	I0917 18:10:53.443476   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:53.444150   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find current IP address of domain force-systemd-flag-722424 in network mk-force-systemd-flag-722424
	I0917 18:10:53.444175   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | I0917 18:10:53.444103   58889 retry.go:31] will retry after 5.581189391s: waiting for machine to come up
	I0917 18:10:54.729695   58375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:10:54.740982   58375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:10:54.763140   58375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:10:54.763213   58375 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 18:10:54.763234   58375 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 18:10:54.772194   58375 system_pods.go:59] 6 kube-system pods found
	I0917 18:10:54.772227   58375 system_pods.go:61] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:10:54.772238   58375 system_pods.go:61] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:10:54.772245   58375 system_pods.go:61] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:10:54.772253   58375 system_pods.go:61] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:10:54.772260   58375 system_pods.go:61] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:10:54.772268   58375 system_pods.go:61] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:10:54.772279   58375 system_pods.go:74] duration metric: took 9.118021ms to wait for pod list to return data ...
	I0917 18:10:54.772292   58375 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:10:54.776731   58375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:10:54.776757   58375 node_conditions.go:123] node cpu capacity is 2
	I0917 18:10:54.776767   58375 node_conditions.go:105] duration metric: took 4.470719ms to run NodePressure ...
	I0917 18:10:54.776782   58375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:10:55.037803   58375 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:10:55.041708   58375 kubeadm.go:739] kubelet initialised
	I0917 18:10:55.041729   58375 kubeadm.go:740] duration metric: took 3.900914ms waiting for restarted kubelet to initialise ...
	I0917 18:10:55.041738   58375 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:10:55.045730   58375 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:10:55.050796   58375 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace has status "Ready":"True"
	I0917 18:10:55.050816   58375 pod_ready.go:82] duration metric: took 5.059078ms for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:10:55.050824   58375 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
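The pod_ready waits logged above amount to checking whether a kube-system pod reports condition Ready=True. Below is a rough client-go sketch of that check, shown only for orientation; the kubeconfig path is a placeholder and this is not minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-pause-246701")
	fmt.Println(ready, err)
}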
	I0917 18:10:59.029318   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.029807   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has current primary IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.029825   58866 main.go:141] libmachine: (force-systemd-flag-722424) Found IP for machine: 192.168.72.193
	I0917 18:10:59.029835   58866 main.go:141] libmachine: (force-systemd-flag-722424) Reserving static IP address...
	I0917 18:10:59.030251   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-722424", mac: "52:54:00:a0:eb:8c", ip: "192.168.72.193"} in network mk-force-systemd-flag-722424
	I0917 18:10:59.111691   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Getting to WaitForSSH function...
	I0917 18:10:59.111732   58866 main.go:141] libmachine: (force-systemd-flag-722424) Reserved static IP address: 192.168.72.193
	I0917 18:10:59.111745   58866 main.go:141] libmachine: (force-systemd-flag-722424) Waiting for SSH to be available...
	I0917 18:10:59.114177   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.114588   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.114616   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.114883   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using SSH client type: external
	I0917 18:10:59.114905   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa (-rw-------)
	I0917 18:10:59.114945   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:10:59.114962   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | About to run SSH command:
	I0917 18:10:59.114977   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | exit 0
	I0917 18:10:59.241617   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | SSH cmd err, output: <nil>: 
	I0917 18:10:59.241930   58866 main.go:141] libmachine: (force-systemd-flag-722424) KVM machine creation complete!
	I0917 18:10:59.242332   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetConfigRaw
	I0917 18:10:59.242938   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:10:59.243168   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:10:59.243364   58866 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:10:59.243383   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetState
	I0917 18:10:59.244879   58866 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:10:59.244895   58866 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:10:59.244903   58866 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:10:59.244911   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.247839   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.248264   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.248295   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.248439   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.248640   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.248817   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.248984   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.249244   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.249506   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.249524   58866 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:10:59.352609   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
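The WaitForSSH step recorded above is essentially retrying `exit 0` over SSH until the guest answers. A bare-bones sketch using golang.org/x/crypto/ssh follows; the address, user, key path, and retry policy are assumptions for illustration, and host key checking is disabled only because the target is a throwaway test VM.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr and runs "exit 0" until it succeeds or attempts run out.
func waitForSSH(addr, user, keyPath string, attempts int, delay time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
		Timeout:         10 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is up and the trivial command succeeded
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh never became available at %s", addr)
}

func main() {
	// Address from the log above; key path and retry counts are placeholders.
	err := waitForSSH("192.168.72.193:22", "docker", "/path/to/id_rsa", 30, 5*time.Second)
	fmt.Println(err)
}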
	I0917 18:10:59.352634   58866 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:10:59.352646   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.355295   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.355697   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.355739   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.355868   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.356041   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.356211   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.356315   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.356451   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.356636   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.356649   58866 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:10:59.458383   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:10:59.458488   58866 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:10:59.458504   58866 main.go:141] libmachine: Provisioning with buildroot...
	I0917 18:10:59.458512   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.458761   58866 buildroot.go:166] provisioning hostname "force-systemd-flag-722424"
	I0917 18:10:59.458783   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.458925   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.461411   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.461702   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.461725   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.461823   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.462010   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.462169   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.462314   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.462485   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.462694   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.462710   58866 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-722424 && echo "force-systemd-flag-722424" | sudo tee /etc/hostname
	I0917 18:10:59.577635   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-722424
	
	I0917 18:10:59.577665   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.580292   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.580629   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.580668   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.580900   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.581111   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.581282   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.581416   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.581561   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:10:59.581782   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:10:59.581810   58866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-722424' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-722424/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-722424' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:10:59.687087   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:10:59.687126   58866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:10:59.687184   58866 buildroot.go:174] setting up certificates
	I0917 18:10:59.687205   58866 provision.go:84] configureAuth start
	I0917 18:10:59.687224   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetMachineName
	I0917 18:10:59.687533   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:10:59.690331   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.690731   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.690753   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.690907   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.692953   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.693247   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.693273   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.693414   58866 provision.go:143] copyHostCerts
	I0917 18:10:59.693441   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:10:59.693501   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:10:59.693511   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:10:59.693564   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:10:59.693652   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:10:59.693670   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:10:59.693674   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:10:59.693699   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:10:59.693764   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:10:59.693780   58866 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:10:59.693786   58866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:10:59.693802   58866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:10:59.693861   58866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-722424 san=[127.0.0.1 192.168.72.193 force-systemd-flag-722424 localhost minikube]
	I0917 18:10:59.867868   58866 provision.go:177] copyRemoteCerts
	I0917 18:10:59.867935   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:10:59.867963   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:10:59.870837   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.871204   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:10:59.871229   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:10:59.871414   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:10:59.871603   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:10:59.871782   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:10:59.871943   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:10:59.959908   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 18:10:59.959978   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:10:59.989707   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 18:10:59.989781   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0917 18:11:00.019023   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 18:11:00.019097   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:11:00.047639   58866 provision.go:87] duration metric: took 360.418667ms to configureAuth
	I0917 18:11:00.047665   58866 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:11:00.047840   58866 config.go:182] Loaded profile config "force-systemd-flag-722424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:11:00.047908   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.050491   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.050815   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.050853   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.051094   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.051275   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.051478   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.051651   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.051813   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:11:00.052014   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:11:00.052039   58866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:11:00.514537   59176 start.go:364] duration metric: took 10.472915323s to acquireMachinesLock for "NoKubernetes-267093"
	I0917 18:11:00.514596   59176 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-267093 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-267093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:11:00.514741   59176 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:10:57.058649   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:10:59.560707   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:00.270726   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:11:00.270760   58866 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:11:00.270773   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetURL
	I0917 18:11:00.272238   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | Using libvirt version 6000000
	I0917 18:11:00.274897   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.275306   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.275337   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.275499   58866 main.go:141] libmachine: Docker is up and running!
	I0917 18:11:00.275512   58866 main.go:141] libmachine: Reticulating splines...
	I0917 18:11:00.275521   58866 client.go:171] duration metric: took 24.960078669s to LocalClient.Create
	I0917 18:11:00.275549   58866 start.go:167] duration metric: took 24.960138327s to libmachine.API.Create "force-systemd-flag-722424"
	I0917 18:11:00.275561   58866 start.go:293] postStartSetup for "force-systemd-flag-722424" (driver="kvm2")
	I0917 18:11:00.275575   58866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:11:00.275598   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.275866   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:11:00.275888   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.278397   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.278829   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.278849   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.279038   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.279225   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.279392   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.279510   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.359683   58866 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:11:00.364192   58866 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:11:00.364219   58866 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:11:00.364286   58866 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:11:00.364373   58866 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:11:00.364384   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /etc/ssl/certs/182592.pem
	I0917 18:11:00.364465   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:11:00.374283   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:11:00.400120   58866 start.go:296] duration metric: took 124.543905ms for postStartSetup
	I0917 18:11:00.400177   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetConfigRaw
	I0917 18:11:00.400819   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:00.403665   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.403992   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.404016   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.404280   58866 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/config.json ...
	I0917 18:11:00.404492   58866 start.go:128] duration metric: took 25.107904017s to createHost
	I0917 18:11:00.404523   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.406875   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.407186   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.407214   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.407361   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.407581   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.407740   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.407947   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.408130   58866 main.go:141] libmachine: Using SSH client type: native
	I0917 18:11:00.408300   58866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.193 22 <nil> <nil>}
	I0917 18:11:00.408311   58866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:11:00.514344   58866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726596660.489673957
	
	I0917 18:11:00.514373   58866 fix.go:216] guest clock: 1726596660.489673957
	I0917 18:11:00.514383   58866 fix.go:229] Guest: 2024-09-17 18:11:00.489673957 +0000 UTC Remote: 2024-09-17 18:11:00.404508 +0000 UTC m=+25.217860548 (delta=85.165957ms)
	I0917 18:11:00.514408   58866 fix.go:200] guest clock delta is within tolerance: 85.165957ms
	I0917 18:11:00.514415   58866 start.go:83] releasing machines lock for "force-systemd-flag-722424", held for 25.217930432s
	I0917 18:11:00.514446   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.514716   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:00.517742   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.518135   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.518161   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.518373   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.518859   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.519028   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .DriverName
	I0917 18:11:00.519134   58866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:11:00.519191   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.519255   58866 ssh_runner.go:195] Run: cat /version.json
	I0917 18:11:00.519284   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHHostname
	I0917 18:11:00.522007   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522268   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522390   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.522421   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522607   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.522724   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:00.522744   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:00.522751   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.522879   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHPort
	I0917 18:11:00.522927   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.523022   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHKeyPath
	I0917 18:11:00.523032   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.523162   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetSSHUsername
	I0917 18:11:00.523545   58866 sshutil.go:53] new ssh client: &{IP:192.168.72.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/force-systemd-flag-722424/id_rsa Username:docker}
	I0917 18:11:00.629684   58866 ssh_runner.go:195] Run: systemctl --version
	I0917 18:11:00.637344   58866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:11:00.804032   58866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:11:00.810730   58866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:11:00.810805   58866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:11:00.829608   58866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:11:00.829636   58866 start.go:495] detecting cgroup driver to use...
	I0917 18:11:00.829650   58866 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0917 18:11:00.829714   58866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:11:00.847702   58866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:11:00.863256   58866 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:11:00.863324   58866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:11:00.878545   58866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:11:00.893911   58866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:11:01.018538   58866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:11:01.165859   58866 docker.go:233] disabling docker service ...
	I0917 18:11:01.165923   58866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:11:01.180569   58866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:11:01.194536   58866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:11:01.337220   58866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:11:01.480781   58866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:11:01.495295   58866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:11:01.515646   58866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:11:01.515733   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.527288   58866 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 18:11:01.527368   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.539525   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.551259   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.563352   58866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:11:01.575357   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.587452   58866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.606193   58866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:11:01.618021   58866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:11:01.631380   58866 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:11:01.631449   58866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:11:01.648473   58866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:11:01.662048   58866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:01.807832   58866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:11:01.904438   58866 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:11:01.904524   58866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:11:01.909628   58866 start.go:563] Will wait 60s for crictl version
	I0917 18:11:01.909691   58866 ssh_runner.go:195] Run: which crictl
	I0917 18:11:01.913695   58866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:11:01.973370   58866 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:11:01.973466   58866 ssh_runner.go:195] Run: crio --version
	I0917 18:11:02.010818   58866 ssh_runner.go:195] Run: crio --version
	I0917 18:11:02.043824   58866 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
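Before the runtime is used, the log above waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to answer. The sketch below illustrates only the socket wait; the path and 60s budget come from the log, while the poll interval and overall structure are assumptions rather than minikube's actual code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists (e.g. the CRI-O socket) or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}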
	I0917 18:11:00.517138   59176 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0917 18:11:00.517444   59176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:11:00.517478   59176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:11:00.538142   59176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0917 18:11:00.538656   59176 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:11:00.539269   59176 main.go:141] libmachine: Using API Version  1
	I0917 18:11:00.539307   59176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:11:00.539702   59176 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:11:00.539917   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .GetMachineName
	I0917 18:11:00.540093   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .DriverName
	I0917 18:11:00.540232   59176 start.go:159] libmachine.API.Create for "NoKubernetes-267093" (driver="kvm2")
	I0917 18:11:00.540256   59176 client.go:168] LocalClient.Create starting
	I0917 18:11:00.540293   59176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:11:00.540328   59176 main.go:141] libmachine: Decoding PEM data...
	I0917 18:11:00.540348   59176 main.go:141] libmachine: Parsing certificate...
	I0917 18:11:00.540423   59176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:11:00.540456   59176 main.go:141] libmachine: Decoding PEM data...
	I0917 18:11:00.540467   59176 main.go:141] libmachine: Parsing certificate...
	I0917 18:11:00.540519   59176 main.go:141] libmachine: Running pre-create checks...
	I0917 18:11:00.540527   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .PreCreateCheck
	I0917 18:11:00.540873   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .GetConfigRaw
	I0917 18:11:00.541407   59176 main.go:141] libmachine: Creating machine...
	I0917 18:11:00.541414   59176 main.go:141] libmachine: (NoKubernetes-267093) Calling .Create
	I0917 18:11:00.541549   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating KVM machine...
	I0917 18:11:00.542932   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | found existing default KVM network
	I0917 18:11:00.544457   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.544281   59262 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:8c:15} reservation:<nil>}
	I0917 18:11:00.545483   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.545388   59262 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:17:1a} reservation:<nil>}
	I0917 18:11:00.546921   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.546831   59262 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003051c0}
	I0917 18:11:00.546975   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | created network xml: 
	I0917 18:11:00.546998   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | <network>
	I0917 18:11:00.547004   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <name>mk-NoKubernetes-267093</name>
	I0917 18:11:00.547014   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <dns enable='no'/>
	I0917 18:11:00.547018   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   
	I0917 18:11:00.547030   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0917 18:11:00.547034   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |     <dhcp>
	I0917 18:11:00.547042   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0917 18:11:00.547046   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |     </dhcp>
	I0917 18:11:00.547049   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   </ip>
	I0917 18:11:00.547053   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG |   
	I0917 18:11:00.547056   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | </network>
	I0917 18:11:00.547061   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | 
	I0917 18:11:00.552614   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | trying to create private KVM network mk-NoKubernetes-267093 192.168.61.0/24...
	I0917 18:11:00.630789   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | private KVM network mk-NoKubernetes-267093 192.168.61.0/24 created
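(Illustrative note, not part of the log: the step above generates libvirt network XML and creates the private KVM network for the VM. A minimal sketch of the same operation, shelling out to virsh instead of using the in-process libvirt bindings the kvm2 driver actually uses; the XML and network name are copied from the log, everything else is an assumption.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // networkXML mirrors the XML minikube logged for mk-NoKubernetes-267093.
    const networkXML = `<network>
      <name>mk-NoKubernetes-267093</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the XML to a temp file and hand it to virsh.
        f, err := os.CreateTemp("", "mk-net-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        for _, args := range [][]string{
            {"net-define", f.Name()},                    // register the network with libvirt
            {"net-start", "mk-NoKubernetes-267093"},     // bring it up (creates the bridge)
            {"net-autostart", "mk-NoKubernetes-267093"}, // start it on host boot
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            out, err := cmd.CombinedOutput()
            fmt.Printf("virsh %v: %s\n", args, out)
            if err != nil {
                panic(err)
            }
        }
    }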
	I0917 18:11:00.630858   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 ...
	I0917 18:11:00.630881   59176 main.go:141] libmachine: (NoKubernetes-267093) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:11:00.631075   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.630905   59262 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:11:00.631102   59176 main.go:141] libmachine: (NoKubernetes-267093) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:11:00.881793   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:00.881672   59262 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/id_rsa...
	I0917 18:11:01.098716   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:01.098582   59262 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/NoKubernetes-267093.rawdisk...
	I0917 18:11:01.098729   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Writing magic tar header
	I0917 18:11:01.098741   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Writing SSH key tar header
	I0917 18:11:01.098747   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:01.098726   59262 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 ...
	I0917 18:11:01.098855   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093
	I0917 18:11:01.098899   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093 (perms=drwx------)
	I0917 18:11:01.098918   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:11:01.098925   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:11:01.098939   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:11:01.098946   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:11:01.098954   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:11:01.098957   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:11:01.098964   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Checking permissions on dir: /home
	I0917 18:11:01.098968   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | Skipping /home - not owner
	I0917 18:11:01.098976   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:11:01.098981   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:11:01.099009   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:11:01.099025   59176 main.go:141] libmachine: (NoKubernetes-267093) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:11:01.099034   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating domain...
	I0917 18:11:01.100227   59176 main.go:141] libmachine: (NoKubernetes-267093) define libvirt domain using xml: 
	I0917 18:11:01.100250   59176 main.go:141] libmachine: (NoKubernetes-267093) <domain type='kvm'>
	I0917 18:11:01.100265   59176 main.go:141] libmachine: (NoKubernetes-267093)   <name>NoKubernetes-267093</name>
	I0917 18:11:01.100277   59176 main.go:141] libmachine: (NoKubernetes-267093)   <memory unit='MiB'>6000</memory>
	I0917 18:11:01.100300   59176 main.go:141] libmachine: (NoKubernetes-267093)   <vcpu>2</vcpu>
	I0917 18:11:01.100306   59176 main.go:141] libmachine: (NoKubernetes-267093)   <features>
	I0917 18:11:01.100313   59176 main.go:141] libmachine: (NoKubernetes-267093)     <acpi/>
	I0917 18:11:01.100327   59176 main.go:141] libmachine: (NoKubernetes-267093)     <apic/>
	I0917 18:11:01.100334   59176 main.go:141] libmachine: (NoKubernetes-267093)     <pae/>
	I0917 18:11:01.100343   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100350   59176 main.go:141] libmachine: (NoKubernetes-267093)   </features>
	I0917 18:11:01.100356   59176 main.go:141] libmachine: (NoKubernetes-267093)   <cpu mode='host-passthrough'>
	I0917 18:11:01.100363   59176 main.go:141] libmachine: (NoKubernetes-267093)   
	I0917 18:11:01.100368   59176 main.go:141] libmachine: (NoKubernetes-267093)   </cpu>
	I0917 18:11:01.100375   59176 main.go:141] libmachine: (NoKubernetes-267093)   <os>
	I0917 18:11:01.100382   59176 main.go:141] libmachine: (NoKubernetes-267093)     <type>hvm</type>
	I0917 18:11:01.100389   59176 main.go:141] libmachine: (NoKubernetes-267093)     <boot dev='cdrom'/>
	I0917 18:11:01.100396   59176 main.go:141] libmachine: (NoKubernetes-267093)     <boot dev='hd'/>
	I0917 18:11:01.100404   59176 main.go:141] libmachine: (NoKubernetes-267093)     <bootmenu enable='no'/>
	I0917 18:11:01.100415   59176 main.go:141] libmachine: (NoKubernetes-267093)   </os>
	I0917 18:11:01.100422   59176 main.go:141] libmachine: (NoKubernetes-267093)   <devices>
	I0917 18:11:01.100435   59176 main.go:141] libmachine: (NoKubernetes-267093)     <disk type='file' device='cdrom'>
	I0917 18:11:01.100449   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/boot2docker.iso'/>
	I0917 18:11:01.100455   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target dev='hdc' bus='scsi'/>
	I0917 18:11:01.100461   59176 main.go:141] libmachine: (NoKubernetes-267093)       <readonly/>
	I0917 18:11:01.100466   59176 main.go:141] libmachine: (NoKubernetes-267093)     </disk>
	I0917 18:11:01.100478   59176 main.go:141] libmachine: (NoKubernetes-267093)     <disk type='file' device='disk'>
	I0917 18:11:01.100485   59176 main.go:141] libmachine: (NoKubernetes-267093)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:11:01.100498   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/NoKubernetes-267093/NoKubernetes-267093.rawdisk'/>
	I0917 18:11:01.100503   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target dev='hda' bus='virtio'/>
	I0917 18:11:01.100509   59176 main.go:141] libmachine: (NoKubernetes-267093)     </disk>
	I0917 18:11:01.100514   59176 main.go:141] libmachine: (NoKubernetes-267093)     <interface type='network'>
	I0917 18:11:01.100521   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source network='mk-NoKubernetes-267093'/>
	I0917 18:11:01.100527   59176 main.go:141] libmachine: (NoKubernetes-267093)       <model type='virtio'/>
	I0917 18:11:01.100533   59176 main.go:141] libmachine: (NoKubernetes-267093)     </interface>
	I0917 18:11:01.100537   59176 main.go:141] libmachine: (NoKubernetes-267093)     <interface type='network'>
	I0917 18:11:01.100549   59176 main.go:141] libmachine: (NoKubernetes-267093)       <source network='default'/>
	I0917 18:11:01.100566   59176 main.go:141] libmachine: (NoKubernetes-267093)       <model type='virtio'/>
	I0917 18:11:01.100573   59176 main.go:141] libmachine: (NoKubernetes-267093)     </interface>
	I0917 18:11:01.100579   59176 main.go:141] libmachine: (NoKubernetes-267093)     <serial type='pty'>
	I0917 18:11:01.100585   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target port='0'/>
	I0917 18:11:01.100590   59176 main.go:141] libmachine: (NoKubernetes-267093)     </serial>
	I0917 18:11:01.100597   59176 main.go:141] libmachine: (NoKubernetes-267093)     <console type='pty'>
	I0917 18:11:01.100604   59176 main.go:141] libmachine: (NoKubernetes-267093)       <target type='serial' port='0'/>
	I0917 18:11:01.100610   59176 main.go:141] libmachine: (NoKubernetes-267093)     </console>
	I0917 18:11:01.100615   59176 main.go:141] libmachine: (NoKubernetes-267093)     <rng model='virtio'>
	I0917 18:11:01.100629   59176 main.go:141] libmachine: (NoKubernetes-267093)       <backend model='random'>/dev/random</backend>
	I0917 18:11:01.100638   59176 main.go:141] libmachine: (NoKubernetes-267093)     </rng>
	I0917 18:11:01.100645   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100650   59176 main.go:141] libmachine: (NoKubernetes-267093)     
	I0917 18:11:01.100656   59176 main.go:141] libmachine: (NoKubernetes-267093)   </devices>
	I0917 18:11:01.100661   59176 main.go:141] libmachine: (NoKubernetes-267093) </domain>
	I0917 18:11:01.100672   59176 main.go:141] libmachine: (NoKubernetes-267093) 
	I0917 18:11:01.105137   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:b3:33:9b in network default
	I0917 18:11:01.105718   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring networks are active...
	I0917 18:11:01.105732   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:01.106367   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring network default is active
	I0917 18:11:01.106635   59176 main.go:141] libmachine: (NoKubernetes-267093) Ensuring network mk-NoKubernetes-267093 is active
	I0917 18:11:01.107060   59176 main.go:141] libmachine: (NoKubernetes-267093) Getting domain xml...
	I0917 18:11:01.107723   59176 main.go:141] libmachine: (NoKubernetes-267093) Creating domain...
	I0917 18:11:02.501726   59176 main.go:141] libmachine: (NoKubernetes-267093) Waiting to get IP...
	I0917 18:11:02.502778   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.503309   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.503329   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.503269   59262 retry.go:31] will retry after 194.277684ms: waiting for machine to come up
	I0917 18:11:02.699875   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.700401   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.700421   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.700368   59262 retry.go:31] will retry after 248.553852ms: waiting for machine to come up
	I0917 18:11:02.950852   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:02.951591   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:02.951632   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:02.951538   59262 retry.go:31] will retry after 352.682061ms: waiting for machine to come up
	I0917 18:11:03.306017   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:03.306635   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:03.306658   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:03.306588   59262 retry.go:31] will retry after 430.231323ms: waiting for machine to come up
	I0917 18:11:03.738275   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:03.738825   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:03.738845   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:03.738780   59262 retry.go:31] will retry after 535.135352ms: waiting for machine to come up
	I0917 18:11:04.275783   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:04.276348   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:04.276381   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:04.276301   59262 retry.go:31] will retry after 828.941966ms: waiting for machine to come up
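(Illustrative note, not part of the log: the retry.go lines above poll for the new VM's DHCP lease with a growing, jittered delay. A rough sketch of that polling pattern; the lookupIP helper is hypothetical and stands in for the driver's lease query.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
    // for the domain's MAC address; it fails until the guest has an address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly matching the
            // 194ms, 248ms, 352ms, 430ms, ... progression in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:0c:d2:bc", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }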
	I0917 18:11:02.045041   58866 main.go:141] libmachine: (force-systemd-flag-722424) Calling .GetIP
	I0917 18:11:02.048867   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:02.049487   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:eb:8c", ip: ""} in network mk-force-systemd-flag-722424: {Iface:virbr4 ExpiryTime:2024-09-17 19:10:50 +0000 UTC Type:0 Mac:52:54:00:a0:eb:8c Iaid: IPaddr:192.168.72.193 Prefix:24 Hostname:force-systemd-flag-722424 Clientid:01:52:54:00:a0:eb:8c}
	I0917 18:11:02.049517   58866 main.go:141] libmachine: (force-systemd-flag-722424) DBG | domain force-systemd-flag-722424 has defined IP address 192.168.72.193 and MAC address 52:54:00:a0:eb:8c in network mk-force-systemd-flag-722424
	I0917 18:11:02.049781   58866 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:11:02.055016   58866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:11:02.068785   58866 kubeadm.go:883] updating cluster {Name:force-systemd-flag-722424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:11:02.068894   58866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:11:02.068933   58866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:11:02.102203   58866 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:11:02.102275   58866 ssh_runner.go:195] Run: which lz4
	I0917 18:11:02.106336   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0917 18:11:02.106416   58866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:11:02.110687   58866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:11:02.110719   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:11:03.628332   58866 crio.go:462] duration metric: took 1.521935538s to copy over tarball
	I0917 18:11:03.628414   58866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:11:02.058409   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:04.560652   58375 pod_ready.go:103] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:05.059393   58375 pod_ready.go:93] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:05.059429   58375 pod_ready.go:82] duration metric: took 10.008598103s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:05.059441   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:05.853071   58866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.224620276s)
	I0917 18:11:05.853103   58866 crio.go:469] duration metric: took 2.224740933s to extract the tarball
	I0917 18:11:05.853112   58866 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:11:05.893906   58866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:11:05.946593   58866 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:11:05.946615   58866 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:11:05.946622   58866 kubeadm.go:934] updating node { 192.168.72.193 8443 v1.31.1 crio true true} ...
	I0917 18:11:05.946737   58866 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-722424 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:11:05.946799   58866 ssh_runner.go:195] Run: crio config
	I0917 18:11:06.005144   58866 cni.go:84] Creating CNI manager for ""
	I0917 18:11:06.005164   58866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:11:06.005173   58866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:11:06.005197   58866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-722424 NodeName:force-systemd-flag-722424 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.193 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:11:06.005385   58866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-722424"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:11:06.005454   58866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:11:06.019438   58866 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:11:06.019517   58866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:11:06.032605   58866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0917 18:11:06.054178   58866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:11:06.073774   58866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
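(Illustrative note, not part of the log: the kubeadm/kubelet/kube-proxy configuration printed above is copied to /var/tmp/minikube/kubeadm.yaml.new in the guest and later fed to kubeadm init; the exact invocation appears further down in this log. A minimal sketch of that final step, assuming kubeadm is on PATH and the config has already been written to the path below; in the real flow minikube runs this over SSH inside the VM, and the preflight-error list is abbreviated here.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Assumes the YAML shown above has been written to this path.
        cfgPath := "/var/tmp/minikube/kubeadm.yaml"

        // Mirrors the flags of the logged kubeadm invocation, shortened for readability.
        cmd := exec.Command("sudo", "kubeadm", "init",
            "--config", cfgPath,
            "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem",
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
            os.Exit(1)
        }
    }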
	I0917 18:11:06.093697   58866 ssh_runner.go:195] Run: grep 192.168.72.193	control-plane.minikube.internal$ /etc/hosts
	I0917 18:11:06.097762   58866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:11:06.111777   58866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:06.229014   58866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:11:06.246644   58866 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424 for IP: 192.168.72.193
	I0917 18:11:06.246670   58866 certs.go:194] generating shared ca certs ...
	I0917 18:11:06.246698   58866 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.246887   58866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:11:06.246949   58866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:11:06.246963   58866 certs.go:256] generating profile certs ...
	I0917 18:11:06.247040   58866 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key
	I0917 18:11:06.247075   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt with IP's: []
	I0917 18:11:06.381827   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt ...
	I0917 18:11:06.381860   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.crt: {Name:mk86097a41c322095d29daf8e622b2b28a99e1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.382049   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key ...
	I0917 18:11:06.382070   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/client.key: {Name:mkba77cf4011828199491b7a86203b715802cb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.382202   58866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66
	I0917 18:11:06.382228   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.193]
	I0917 18:11:06.495300   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 ...
	I0917 18:11:06.495331   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66: {Name:mk0f8ed889397e7c2c4cba36a22642380e555e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.495487   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66 ...
	I0917 18:11:06.495499   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66: {Name:mk36f61736ce6e8c93dff79e54f21136bf5676d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.495573   58866 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt.7fbe9d66 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt
	I0917 18:11:06.495669   58866 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key.7fbe9d66 -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key
	I0917 18:11:06.495763   58866 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key
	I0917 18:11:06.495782   58866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt with IP's: []
	I0917 18:11:06.597612   58866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt ...
	I0917 18:11:06.597643   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt: {Name:mk19a59ef793535a031f97388f4542bcd4803fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.597810   58866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key ...
	I0917 18:11:06.597825   58866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key: {Name:mkc124e54c2c0ec78debdbff969152258da04920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:06.597897   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 18:11:06.597916   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 18:11:06.597931   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 18:11:06.597945   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 18:11:06.597958   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 18:11:06.597971   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 18:11:06.597983   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 18:11:06.597995   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 18:11:06.598053   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:11:06.598090   58866 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:11:06.598100   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:11:06.598124   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:11:06.598147   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:11:06.598167   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:11:06.598209   58866 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:11:06.598234   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem -> /usr/share/ca-certificates/18259.pem
	I0917 18:11:06.598263   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> /usr/share/ca-certificates/182592.pem
	I0917 18:11:06.598276   58866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.598808   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:11:06.628688   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:11:06.655202   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:11:06.684871   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:11:06.714276   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 18:11:06.743755   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:11:06.774263   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:11:06.804691   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/force-systemd-flag-722424/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:11:06.834822   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:11:06.863587   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:11:06.891087   58866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:11:06.917696   58866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:11:06.937738   58866 ssh_runner.go:195] Run: openssl version
	I0917 18:11:06.944797   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:11:06.960105   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.966504   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.966562   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:11:06.973218   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:11:06.985107   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:11:07.001874   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.008460   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.008534   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:11:07.017107   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:11:07.036073   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:11:07.060162   58866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.066845   58866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.066913   58866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:11:07.077271   58866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
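(Illustrative note, not part of the log: the openssl/ln commands above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs, e.g. b5213941.0. A small sketch of the same idea, assuming it runs as root on the guest; paths are taken from the log.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), the same effect as the bash one-liners above.
    func installCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, then point it at the certificate.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, c := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/18259.pem",
            "/usr/share/ca-certificates/182592.pem",
        } {
            if err := installCACert(c); err != nil {
                fmt.Fprintln(os.Stderr, c, err)
            }
        }
    }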
	I0917 18:11:07.096066   58866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:11:07.101775   58866 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:11:07.101834   58866 kubeadm.go:392] StartCluster: {Name:force-systemd-flag-722424 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-722424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:11:07.101924   58866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:11:07.101982   58866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:11:07.143064   58866 cri.go:89] found id: ""
	I0917 18:11:07.143141   58866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:11:07.154240   58866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:11:07.165029   58866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:11:07.175778   58866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:11:07.175801   58866 kubeadm.go:157] found existing configuration files:
	
	I0917 18:11:07.175866   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:11:07.185744   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:11:07.185823   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:11:07.196612   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:11:07.206772   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:11:07.206840   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:11:07.217977   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:11:07.227882   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:11:07.227949   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:11:07.238655   58866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:11:07.248418   58866 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:11:07.248493   58866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:11:07.258861   58866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:11:07.377104   58866 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:11:07.377265   58866 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:11:07.499737   58866 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:11:07.499901   58866 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:11:07.500039   58866 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:11:07.511785   58866 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:11:07.564894   58866 out.go:235]   - Generating certificates and keys ...
	I0917 18:11:07.565071   58866 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:11:07.565180   58866 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:11:07.586663   58866 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:11:07.781422   58866 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:11:07.921986   58866 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:11:08.265062   58866 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:11:08.315340   58866 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:11:08.315529   58866 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-722424 localhost] and IPs [192.168.72.193 127.0.0.1 ::1]
	I0917 18:11:08.416208   58866 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:11:08.416434   58866 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-722424 localhost] and IPs [192.168.72.193 127.0.0.1 ::1]
	I0917 18:11:08.585541   58866 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:11:08.679195   58866 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:11:08.743216   58866 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:11:08.743542   58866 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:11:08.853752   58866 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:11:08.932756   58866 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:11:09.177843   58866 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:11:09.232425   58866 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:11:09.447110   58866 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:11:09.447679   58866 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:11:09.451184   58866 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:11:05.107465   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:05.107971   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:05.107996   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:05.107933   59262 retry.go:31] will retry after 1.037870312s: waiting for machine to come up
	I0917 18:11:06.147244   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:06.147951   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:06.147986   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:06.147926   59262 retry.go:31] will retry after 1.084729336s: waiting for machine to come up
	I0917 18:11:07.233974   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:07.234447   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:07.234467   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:07.234381   59262 retry.go:31] will retry after 1.81974704s: waiting for machine to come up
	I0917 18:11:09.055788   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:09.056264   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:09.056280   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:09.056205   59262 retry.go:31] will retry after 1.630185156s: waiting for machine to come up
	I0917 18:11:07.068277   58375 pod_ready.go:103] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:09.569066   58375 pod_ready.go:103] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"False"
	I0917 18:11:10.069777   58375 pod_ready.go:93] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.069810   58375 pod_ready.go:82] duration metric: took 5.010359365s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.069831   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.076290   58375 pod_ready.go:93] pod "kube-controller-manager-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.076316   58375 pod_ready.go:82] duration metric: took 6.475825ms for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.076327   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.081977   58375 pod_ready.go:93] pod "kube-proxy-vxgcn" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.082006   58375 pod_ready.go:82] duration metric: took 5.671891ms for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.082019   58375 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.087808   58375 pod_ready.go:93] pod "kube-scheduler-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.087831   58375 pod_ready.go:82] duration metric: took 5.804937ms for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.087838   58375 pod_ready.go:39] duration metric: took 15.046091552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
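(Illustrative note, not part of the log: the pod_ready checks above poll the API server until each control-plane pod reports the Ready condition. A rough equivalent using client-go; the kubeconfig path and pod name are taken from this log, everything else is an illustrative assumption rather than minikube's actual helper.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod has the Ready condition set to True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19662-11085/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-pause-246701", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }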
	I0917 18:11:10.087853   58375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:11:10.107302   58375 ops.go:34] apiserver oom_adj: -16
	I0917 18:11:10.107328   58375 kubeadm.go:597] duration metric: took 42.159106129s to restartPrimaryControlPlane
	I0917 18:11:10.107343   58375 kubeadm.go:394] duration metric: took 42.535162008s to StartCluster
	I0917 18:11:10.107364   58375 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:10.107442   58375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:11:10.108104   58375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:11:10.108359   58375 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:11:10.108436   58375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:11:10.108725   58375 config.go:182] Loaded profile config "pause-246701": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:11:10.111365   58375 out.go:177] * Enabled addons: 
	I0917 18:11:10.111385   58375 out.go:177] * Verifying Kubernetes components...
	I0917 18:11:09.453247   58866 out.go:235]   - Booting up control plane ...
	I0917 18:11:09.453378   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:11:09.453508   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:11:09.453615   58866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:11:09.471050   58866 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:11:09.478679   58866 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:11:09.478776   58866 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:11:09.627956   58866 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:11:09.628115   58866 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:11:10.129019   58866 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.637536ms
	I0917 18:11:10.129161   58866 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:11:10.112960   58375 addons.go:510] duration metric: took 4.532285ms for enable addons: enabled=[]
	I0917 18:11:10.112994   58375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:11:10.279460   58375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:11:10.302491   58375 node_ready.go:35] waiting up to 6m0s for node "pause-246701" to be "Ready" ...
	I0917 18:11:10.306062   58375 node_ready.go:49] node "pause-246701" has status "Ready":"True"
	I0917 18:11:10.306086   58375 node_ready.go:38] duration metric: took 3.560004ms for node "pause-246701" to be "Ready" ...
	I0917 18:11:10.306093   58375 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:11:10.311355   58375 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.464444   58375 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.464488   58375 pod_ready.go:82] duration metric: took 153.104082ms for pod "coredns-7c65d6cfc9-dkldh" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.464501   58375 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.864824   58375 pod_ready.go:93] pod "etcd-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:10.864849   58375 pod_ready.go:82] duration metric: took 400.340534ms for pod "etcd-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:10.864858   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.264832   58375 pod_ready.go:93] pod "kube-apiserver-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:11.264858   58375 pod_ready.go:82] duration metric: took 399.993599ms for pod "kube-apiserver-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.264867   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.665494   58375 pod_ready.go:93] pod "kube-controller-manager-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:11.665588   58375 pod_ready.go:82] duration metric: took 400.710991ms for pod "kube-controller-manager-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:11.665615   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.065539   58375 pod_ready.go:93] pod "kube-proxy-vxgcn" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:12.065575   58375 pod_ready.go:82] duration metric: took 399.941304ms for pod "kube-proxy-vxgcn" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.065589   58375 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.465103   58375 pod_ready.go:93] pod "kube-scheduler-pause-246701" in "kube-system" namespace has status "Ready":"True"
	I0917 18:11:12.465135   58375 pod_ready.go:82] duration metric: took 399.53748ms for pod "kube-scheduler-pause-246701" in "kube-system" namespace to be "Ready" ...
	I0917 18:11:12.465146   58375 pod_ready.go:39] duration metric: took 2.159042692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:11:12.465163   58375 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:11:12.465242   58375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:11:12.487151   58375 api_server.go:72] duration metric: took 2.378745999s to wait for apiserver process to appear ...
	I0917 18:11:12.487187   58375 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:11:12.487234   58375 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I0917 18:11:12.494137   58375 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I0917 18:11:12.495390   58375 api_server.go:141] control plane version: v1.31.1
	I0917 18:11:12.495414   58375 api_server.go:131] duration metric: took 8.218787ms to wait for apiserver health ...
	I0917 18:11:12.495422   58375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:11:12.668374   58375 system_pods.go:59] 6 kube-system pods found
	I0917 18:11:12.668428   58375 system_pods.go:61] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:11:12.668435   58375 system_pods.go:61] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running
	I0917 18:11:12.668442   58375 system_pods.go:61] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running
	I0917 18:11:12.668448   58375 system_pods.go:61] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running
	I0917 18:11:12.668454   58375 system_pods.go:61] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:11:12.668459   58375 system_pods.go:61] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running
	I0917 18:11:12.668467   58375 system_pods.go:74] duration metric: took 173.038167ms to wait for pod list to return data ...
	I0917 18:11:12.668476   58375 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:11:12.865188   58375 default_sa.go:45] found service account: "default"
	I0917 18:11:12.865223   58375 default_sa.go:55] duration metric: took 196.738943ms for default service account to be created ...
	I0917 18:11:12.865248   58375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:11:13.067058   58375 system_pods.go:86] 6 kube-system pods found
	I0917 18:11:13.067097   58375 system_pods.go:89] "coredns-7c65d6cfc9-dkldh" [1c024a80-e613-48c2-b2c2-79bb05774a91] Running
	I0917 18:11:13.067105   58375 system_pods.go:89] "etcd-pause-246701" [d4796279-f272-4e02-a266-5da5f4aafec1] Running
	I0917 18:11:13.067111   58375 system_pods.go:89] "kube-apiserver-pause-246701" [6355bbd7-6301-4aaf-aa88-0951b3a578e1] Running
	I0917 18:11:13.067117   58375 system_pods.go:89] "kube-controller-manager-pause-246701" [6c4f553f-4689-45b2-8c50-c237f39bbe89] Running
	I0917 18:11:13.067123   58375 system_pods.go:89] "kube-proxy-vxgcn" [de638753-d03b-438c-8d1f-d43af2bbcce4] Running
	I0917 18:11:13.067128   58375 system_pods.go:89] "kube-scheduler-pause-246701" [836bc257-b131-4c83-add1-70edf6f7fb9b] Running
	I0917 18:11:13.067137   58375 system_pods.go:126] duration metric: took 201.881436ms to wait for k8s-apps to be running ...
	I0917 18:11:13.067146   58375 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:11:13.067202   58375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:11:13.083149   58375 system_svc.go:56] duration metric: took 15.991297ms WaitForService to wait for kubelet
	I0917 18:11:13.083184   58375 kubeadm.go:582] duration metric: took 2.974789811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:11:13.083209   58375 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:11:13.264118   58375 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:11:13.264143   58375 node_conditions.go:123] node cpu capacity is 2
	I0917 18:11:13.264154   58375 node_conditions.go:105] duration metric: took 180.939596ms to run NodePressure ...
	I0917 18:11:13.264164   58375 start.go:241] waiting for startup goroutines ...
	I0917 18:11:13.264170   58375 start.go:246] waiting for cluster config update ...
	I0917 18:11:13.264177   58375 start.go:255] writing updated cluster config ...
	I0917 18:11:13.264455   58375 ssh_runner.go:195] Run: rm -f paused
	I0917 18:11:13.321760   58375 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:11:13.324003   58375 out.go:177] * Done! kubectl is now configured to use "pause-246701" cluster and "default" namespace by default
	I0917 18:11:10.843277   53861 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:11:10.843514   53861 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:11:10.688197   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:10.688689   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:10.688709   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:10.688623   59262 retry.go:31] will retry after 2.794889426s: waiting for machine to come up
	I0917 18:11:13.486570   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | domain NoKubernetes-267093 has defined MAC address 52:54:00:0c:d2:bc in network mk-NoKubernetes-267093
	I0917 18:11:13.487095   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | unable to find current IP address of domain NoKubernetes-267093 in network mk-NoKubernetes-267093
	I0917 18:11:13.487108   59176 main.go:141] libmachine: (NoKubernetes-267093) DBG | I0917 18:11:13.486818   59262 retry.go:31] will retry after 3.440875408s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.366050709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c28bcea2-81ba-4069-98f1-d2695ee75dc2 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.367744074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6458fe23-c30e-4c67-b8bb-d353e8d39e7d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.368294376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596676368261739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6458fe23-c30e-4c67-b8bb-d353e8d39e7d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.369064373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6404de1b-642d-40c0-9c8c-8ae935ed91b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.369256578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6404de1b-642d-40c0-9c8c-8ae935ed91b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.369626019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6404de1b-642d-40c0-9c8c-8ae935ed91b2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.400994291Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1c264d24-d9c8-40c1-9089-cdf845deb475 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.401382511Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dkldh,Uid:1c024a80-e613-48c2-b2c2-79bb05774a91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726596626997199588,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:09:20.163994269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&PodSandboxMetadata{Name:kube-proxy-vxgcn,Uid:de638753-d03b-438c-8d1f-d43af2bbcce4,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1726596626782074514,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:09:20.028124972Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-246701,Uid:3b4c7e73ea292afa0be4ff4b4b13e840,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726596626751554591,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,tier: control-plane,},Annotations:map[string
]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.167:8443,kubernetes.io/config.hash: 3b4c7e73ea292afa0be4ff4b4b13e840,kubernetes.io/config.seen: 2024-09-17T18:09:15.229372649Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&PodSandboxMetadata{Name:etcd-pause-246701,Uid:9e6fb720ecd1c1a6b1371e0c42ab7381,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726596626748992029,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.167:2379,kubernetes.io/config.hash: 9e6fb720ecd1c1a6b1371e0c42ab7381,kubernetes.io/config.seen: 2024-09-17T18:09:15.229368880Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-246701,Uid:9ddff28aa8ccc0582f50b01e72762fef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726596626717030175,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9ddff28aa8ccc0582f50b01e72762fef,kubernetes.io/config.seen: 2024-09-17T18:09:15.229374833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-246701,Uid:a9be2266f3aff2f758a3717d99f55ddc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726596626710363176,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a9be2266f3aff2f758a3717d99f55ddc,kubernetes.io/config.seen: 2024-09-17T18:09:15.229373784Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1dc01b854fe6bed588b3e005a92ee625f2af76a3a8e2e05ae9f50d3d338b61f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-9hsph,Uid:75d9302f-cef7-4884-afa2-b0bf3db8aba1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726596560652646648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-9hsph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d9302f-cef7-4884-afa2-b0bf3db8aba1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-09-17T18:09:20.324956180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&PodSandboxMetadata{Name:kube-proxy-vxgcn,Uid:de638753-d03b-438c-8d1f-d43af2bbcce4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726596560643578894,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:09:20.028124972Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dkldh,Uid:1c024a80-e613-48c2-b2c2-79bb05774a91,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,Cr
eatedAt:1726596560498621432,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:09:20.163994269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1c264d24-d9c8-40c1-9089-cdf845deb475 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.402262828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8d8e222-84b7-49e4-8a89-f0f0fbcb740d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.402341098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8d8e222-84b7-49e4-8a89-f0f0fbcb740d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.402607942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8d8e222-84b7-49e4-8a89-f0f0fbcb740d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.427630399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fff46165-f319-45a5-a953-d2da0cf8ab42 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.427743942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fff46165-f319-45a5-a953-d2da0cf8ab42 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.429429764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c279215b-6c8f-4591-bf77-579829a76cb0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.429825950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596676429798943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c279215b-6c8f-4591-bf77-579829a76cb0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.430565135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7900b67a-ebcb-4f94-a5e9-e538c5ba5bb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.430862542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7900b67a-ebcb-4f94-a5e9-e538c5ba5bb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.431415566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7900b67a-ebcb-4f94-a5e9-e538c5ba5bb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.480621454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2efe391-86b8-41bf-a725-d5e3f38a2b75 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.480748670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2efe391-86b8-41bf-a725-d5e3f38a2b75 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.481965681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67d9004b-dfe0-44ad-8e50-e6eebe6e41ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.482786852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596676482751202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67d9004b-dfe0-44ad-8e50-e6eebe6e41ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.483616445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e60f2ec-1507-449f-87e4-3c38fcd39c6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.483718291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e60f2ec-1507-449f-87e4-3c38fcd39c6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:11:16 pause-246701 crio[2291]: time="2024-09-17 18:11:16.484658184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726596650657267029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726596650660591404,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726596650636678591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726596647289091519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5,PodSandboxId:21d37cdab9a8460580ca74bc40e89c1c7cabfbf05f9c9b1afdb7fd4e024787ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726596628219238122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf,PodSandboxId:dfc9b352c2c27ecf4ce9fc4d07ebbb5ed2d1960f862b6793290cc9505f82d0cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726596627478615128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e,PodSandboxId:f689143900b533ba6c8eb2ffb5f114af7284348cbfcd4b4651ecd85534cedc83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726596627346655721,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9be2266f3aff2f758a3717d99f55ddc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b,PodSandboxId:c1dbcf329a3fe0baab44df5059f981aa8d3b6e9b4f1a59b6a973504a5e2f0818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726596627284480793,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ddff28aa8ccc0582f50b01e72762fef,},Annotations:map[string]string{io.kubernetes
.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31,PodSandboxId:9cd865f7d89ed38200b42a28237a798e5d64cdcfc14b59690cc3d614affd5aa8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726596627296774789,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6fb720ecd1c1a6b1371e0c42ab7381,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0,PodSandboxId:8b5b8a054b6606762ca430d911d7158c889cc2299325f43b02d79bbc0e81de14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726596627115500381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-246701,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b4c7e73ea292afa0be4ff4b4b13e840,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec,PodSandboxId:379e5671aca9ddead60548d48abf110a9f519d09c0e91e666a4735acaa648738,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726596561478944199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c024a80-e613-48c2-b2c2-79bb05774a91,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487,PodSandboxId:5e3ccc0fc94188285738229cd5da6b3d4698b14ee94ed616abdd38177de4e7f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726596560941560068,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxgcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: de638753-d03b-438c-8d1f-d43af2bbcce4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e60f2ec-1507-449f-87e4-3c38fcd39c6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	522b430ef1b39       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   25 seconds ago       Running             kube-controller-manager   2                   f689143900b53       kube-controller-manager-pause-246701
	e36ed07cad23c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   25 seconds ago       Running             kube-scheduler            2                   c1dbcf329a3fe       kube-scheduler-pause-246701
	c83fe770d5129       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   25 seconds ago       Running             kube-apiserver            2                   8b5b8a054b660       kube-apiserver-pause-246701
	7e941d039eb66       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago       Running             etcd                      2                   9cd865f7d89ed       etcd-pause-246701
	f5b65e38703d6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   48 seconds ago       Running             coredns                   1                   21d37cdab9a84       coredns-7c65d6cfc9-dkldh
	1ee133b83897f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   49 seconds ago       Running             kube-proxy                1                   dfc9b352c2c27       kube-proxy-vxgcn
	59be282d82c99       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   49 seconds ago       Exited              kube-controller-manager   1                   f689143900b53       kube-controller-manager-pause-246701
	7eb34f9ac3911       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   49 seconds ago       Exited              etcd                      1                   9cd865f7d89ed       etcd-pause-246701
	01807adcb2c64       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   49 seconds ago       Exited              kube-scheduler            1                   c1dbcf329a3fe       kube-scheduler-pause-246701
	1f0edc99624aa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   49 seconds ago       Exited              kube-apiserver            1                   8b5b8a054b660       kube-apiserver-pause-246701
	f55176a7c1e4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   379e5671aca9d       coredns-7c65d6cfc9-dkldh
	fde175b4dcf39       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   5e3ccc0fc9418       kube-proxy-vxgcn
	
	
	==> coredns [f55176a7c1e4d61fe9e7fbe8d5b007636be692a07306798dea61b38f0124fbec] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1129796489]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.825) (total time: 30002ms):
	Trace[1129796489]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:09:51.826)
	Trace[1129796489]: [30.002034726s] [30.002034726s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1010701865]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.826) (total time: 30001ms):
	Trace[1010701865]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:09:51.827)
	Trace[1010701865]: [30.001009519s] [30.001009519s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1927893987]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:09:21.824) (total time: 30003ms):
	Trace[1927893987]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:09:51.827)
	Trace[1927893987]: [30.003262181s] [30.003262181s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f5b65e38703d6deda573e2b1c3c3f9d60b44e65801baddd4b382fbef4eb1a8b5] <==
	Trace[703433045]: [10.003919345s] [10.003919345s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1898540675]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:28.566) (total time: 10001ms):
	Trace[1898540675]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:10:38.567)
	Trace[1898540675]: [10.001201173s] [10.001201173s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1323431054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:28.567) (total time: 10000ms):
	Trace[1323431054]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:10:38.568)
	Trace[1323431054]: [10.000517482s] [10.000517482s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53742->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1870039339]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:39.454) (total time: 10103ms):
	Trace[1870039339]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer 10103ms (18:10:49.557)
	Trace[1870039339]: [10.103351813s] [10.103351813s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39072->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1983569036]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 18:10:39.423) (total time: 10134ms):
	Trace[1983569036]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer 10134ms (18:10:49.557)
	Trace[1983569036]: [10.134446713s] [10.134446713s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:39066->10.96.0.1:443: read: connection reset by peer
	
	
	==> describe nodes <==
	Name:               pause-246701
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-246701
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=pause-246701
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_09_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:09:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-246701
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:11:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:10:53 +0000   Tue, 17 Sep 2024 18:09:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    pause-246701
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad44cfe662c34575a4f4f46d3c15e2fa
	  System UUID:                ad44cfe6-62c3-4575-a4f4-f46d3c15e2fa
	  Boot ID:                    1d0abb0b-1c4d-4c00-9169-956572385348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dkldh                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     116s
	  kube-system                 etcd-pause-246701                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-pause-246701             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-246701    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-vxgcn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-pause-246701             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 114s               kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node pause-246701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node pause-246701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node pause-246701 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m                 kubelet          Node pause-246701 status is now: NodeReady
	  Normal  RegisteredNode           118s               node-controller  Node pause-246701 event: Registered Node pause-246701 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-246701 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-246701 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-246701 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-246701 event: Registered Node pause-246701 in Controller
	
	
	==> dmesg <==
	[  +0.059171] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067362] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.221608] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.141388] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.335812] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Sep17 18:09] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.078350] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.715520] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.513517] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.424056] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.082611] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.983147] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.205066] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.707366] kauditd_printk_skb: 88 callbacks suppressed
	[Sep17 18:10] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.173721] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
	[  +0.195317] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[  +0.142634] systemd-fstab-generator[2254]: Ignoring "noauto" option for root device
	[  +0.339461] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +7.609088] systemd-fstab-generator[2404]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.690131] kauditd_printk_skb: 87 callbacks suppressed
	[ +11.012397] systemd-fstab-generator[3212]: Ignoring "noauto" option for root device
	[  +3.623387] kauditd_printk_skb: 37 callbacks suppressed
	[Sep17 18:11] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	
	
	==> etcd [7e941d039eb663b198187704a615d9351839eeb925965a890fc1e0308609ad47] <==
	{"level":"info","ts":"2024-09-17T18:10:47.462568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=(2366053629920448428)"}
	{"level":"info","ts":"2024-09-17T18:10:47.462654Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","added-peer-id":"20d5e93d92ee8fac","added-peer-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2024-09-17T18:10:47.462868Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:47.462967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:47.465243Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:10:47.465569Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"20d5e93d92ee8fac","initial-advertise-peer-urls":["https://192.168.39.167:2380"],"listen-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:10:47.465635Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:10:47.465702Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:47.465744Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:48.446856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.446966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.447022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgPreVoteResp from 20d5e93d92ee8fac at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:48.447069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgVoteResp from 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became leader at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.447213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20d5e93d92ee8fac elected leader 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-09-17T18:10:48.451753Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"20d5e93d92ee8fac","local-member-attributes":"{Name:pause-246701 ClientURLs:[https://192.168.39.167:2379]}","request-path":"/0/members/20d5e93d92ee8fac/attributes","cluster-id":"31f708155da0e645","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:10:48.452027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:10:48.452064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:10:48.453267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:10:48.453310Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:10:48.453571Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:48.453861Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:48.454517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.167:2379"}
	{"level":"info","ts":"2024-09-17T18:10:48.454752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [7eb34f9ac3911759cc6a62e2d8b69a97435125e9bd1540f4f5182362fc105a31] <==
	{"level":"info","ts":"2024-09-17T18:10:27.960909Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-17T18:10:27.983924Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","commit-index":424}
	{"level":"info","ts":"2024-09-17T18:10:27.993395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-17T18:10:27.993571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became follower at term 2"}
	{"level":"info","ts":"2024-09-17T18:10:27.993677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 20d5e93d92ee8fac [peers: [], term: 2, commit: 424, applied: 0, lastindex: 424, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-17T18:10:28.000383Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-17T18:10:28.063701Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":403}
	{"level":"info","ts":"2024-09-17T18:10:28.079580Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-17T18:10:28.098610Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"20d5e93d92ee8fac","timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:10:28.102478Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"20d5e93d92ee8fac"}
	{"level":"info","ts":"2024-09-17T18:10:28.104197Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"20d5e93d92ee8fac","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-17T18:10:28.104829Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:10:28.106686Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-17T18:10:28.107348Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.125685Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.129500Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-17T18:10:28.111248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=(2366053629920448428)"}
	{"level":"info","ts":"2024-09-17T18:10:28.134842Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","added-peer-id":"20d5e93d92ee8fac","added-peer-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2024-09-17T18:10:28.137318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:28.139208Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:10:28.147316Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:10:28.154477Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"20d5e93d92ee8fac","initial-advertise-peer-urls":["https://192.168.39.167:2380"],"listen-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:10:28.154591Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:10:28.154724Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-09-17T18:10:28.154759Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.167:2380"}
	
	
	==> kernel <==
	 18:11:16 up 2 min,  0 users,  load average: 1.33, 0.56, 0.21
	Linux pause-246701 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0] <==
	I0917 18:10:27.783234       1 server.go:142] Version: v1.31.1
	I0917 18:10:27.783271       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:28.544764       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0917 18:10:28.545003       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:28.545095       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0917 18:10:28.568369       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0917 18:10:28.568421       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 18:10:28.568762       1 instance.go:232] Using reconciler: lease
	I0917 18:10:28.569440       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0917 18:10:28.570389       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.545802       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.545811       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:29.571738       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:30.838256       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:31.221899       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:31.240591       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:32.974643       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:33.337422       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:33.484252       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:37.481096       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:37.993103       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:38.343232       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:43.609074       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:44.880502       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:10:45.571901       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c83fe770d5129c316fda477f179c5751dba22618f8024c50b1276ca370b52953] <==
	I0917 18:10:53.208736       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 18:10:53.209018       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 18:10:53.209101       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 18:10:53.219349       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 18:10:53.228210       1 aggregator.go:171] initial CRD sync complete...
	I0917 18:10:53.228290       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 18:10:53.228315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 18:10:53.228340       1 cache.go:39] Caches are synced for autoregister controller
	I0917 18:10:53.235724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 18:10:53.235768       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 18:10:53.255866       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 18:10:53.275385       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 18:10:53.275446       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 18:10:53.276872       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 18:10:53.276937       1 policy_source.go:224] refreshing policies
	E0917 18:10:53.279540       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 18:10:53.356436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 18:10:54.078970       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 18:10:54.828087       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 18:10:54.845600       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 18:10:54.896452       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 18:10:54.930459       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 18:10:54.938507       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 18:10:56.827828       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 18:10:56.928056       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [522b430ef1b3944822b390aa7972fbb4ca0d95838f2ed6bf233dd76563902419] <==
	I0917 18:10:56.636460       1 shared_informer.go:320] Caches are synced for PV protection
	I0917 18:10:56.638659       1 shared_informer.go:320] Caches are synced for ephemeral
	I0917 18:10:56.639864       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 18:10:56.640330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="352.904µs"
	I0917 18:10:56.658833       1 shared_informer.go:320] Caches are synced for node
	I0917 18:10:56.658903       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0917 18:10:56.658922       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 18:10:56.658927       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0917 18:10:56.658931       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0917 18:10:56.659019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-246701"
	I0917 18:10:56.663946       1 shared_informer.go:320] Caches are synced for endpoint
	I0917 18:10:56.674438       1 shared_informer.go:320] Caches are synced for disruption
	I0917 18:10:56.678231       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 18:10:56.775229       1 shared_informer.go:320] Caches are synced for HPA
	I0917 18:10:56.825228       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 18:10:56.829955       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:10:56.841676       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 18:10:56.869225       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 18:10:57.275400       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:10:57.323628       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 18:10:57.323673       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 18:10:59.175983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.90258ms"
	I0917 18:10:59.176102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.145µs"
	I0917 18:10:59.225993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.249192ms"
	I0917 18:10:59.226103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.066µs"
	
	
	==> kube-controller-manager [59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e] <==
	
	
	==> kube-proxy [1ee133b83897f5511ef614608a44582e929cfaf2d167679328a73e61846d31cf] <==
	 >
	E0917 18:10:28.701933       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:10:38.709824       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-246701\": net/http: TLS handshake timeout"
	E0917 18:10:49.558720       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-246701\": dial tcp 192.168.39.167:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.167:43038->192.168.39.167:8443: read: connection reset by peer"
	I0917 18:10:53.278767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	E0917 18:10:53.279014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:10:53.347704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:10:53.347835       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:10:53.347860       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:10:53.351635       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:10:53.352018       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:10:53.352048       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:53.353461       1 config.go:199] "Starting service config controller"
	I0917 18:10:53.353507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:10:53.353544       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:10:53.353564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:10:53.354351       1 config.go:328] "Starting node config controller"
	I0917 18:10:53.354378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:10:53.454090       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:10:53.454102       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:10:53.454607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fde175b4dcf392df5e049450ec3eb0f92583c8a3d28d0147040470f4fac40487] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:09:21.878467       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:09:21.894270       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	E0917 18:09:21.894504       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:09:21.937592       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:09:21.937673       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:09:21.937710       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:09:21.940665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:09:21.941014       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:09:21.941044       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:09:21.947478       1 config.go:199] "Starting service config controller"
	I0917 18:09:21.947513       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:09:21.947940       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:09:21.947975       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:09:21.948007       1 config.go:328] "Starting node config controller"
	I0917 18:09:21.948013       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:09:22.048413       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:09:22.048537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:09:22.048795       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b] <==
	I0917 18:10:29.117060       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [e36ed07cad23ca7e45e0ef78137e5611776afa55b9815807f66fb0ed85a99556] <==
	I0917 18:10:52.246819       1 serving.go:386] Generated self-signed cert in-memory
	W0917 18:10:53.141026       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 18:10:53.141115       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 18:10:53.141125       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 18:10:53.141204       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 18:10:53.274374       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 18:10:53.274421       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:10:53.282317       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 18:10:53.284830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 18:10:53.285238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 18:10:53.284857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 18:10:53.385758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390532    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b4c7e73ea292afa0be4ff4b4b13e840-ca-certs\") pod \"kube-apiserver-pause-246701\" (UID: \"3b4c7e73ea292afa0be4ff4b4b13e840\") " pod="kube-system/kube-apiserver-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390551    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9be2266f3aff2f758a3717d99f55ddc-ca-certs\") pod \"kube-controller-manager-pause-246701\" (UID: \"a9be2266f3aff2f758a3717d99f55ddc\") " pod="kube-system/kube-controller-manager-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390566    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9be2266f3aff2f758a3717d99f55ddc-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-246701\" (UID: \"a9be2266f3aff2f758a3717d99f55ddc\") " pod="kube-system/kube-controller-manager-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.390582    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ddff28aa8ccc0582f50b01e72762fef-kubeconfig\") pod \"kube-scheduler-pause-246701\" (UID: \"9ddff28aa8ccc0582f50b01e72762fef\") " pod="kube-system/kube-scheduler-pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.391047    3219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-246701?timeout=10s\": dial tcp 192.168.39.167:8443: connect: connection refused" interval="400ms"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.526905    3219 kubelet_node_status.go:72] "Attempting to register node" node="pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.527724    3219 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.167:8443: connect: connection refused" node="pause-246701"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.616458    3219 scope.go:117] "RemoveContainer" containerID="1f0edc99624aa4f33a3454d0dc5a08cbdd7de87adc98e628385aac56e62708e0"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.619280    3219 scope.go:117] "RemoveContainer" containerID="01807adcb2c6478e08477b688e9213108420bbd562e88f21e7708e3ecdfd077b"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.619663    3219 scope.go:117] "RemoveContainer" containerID="59be282d82c994554051afe711c5e54f9956bb70bc58a9769c9785903771359e"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: E0917 18:10:50.792405    3219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-246701?timeout=10s\": dial tcp 192.168.39.167:8443: connect: connection refused" interval="800ms"
	Sep 17 18:10:50 pause-246701 kubelet[3219]: I0917 18:10:50.929865    3219 kubelet_node_status.go:72] "Attempting to register node" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327088    3219 kubelet_node_status.go:111] "Node was previously registered" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327317    3219 kubelet_node_status.go:75] "Successfully registered node" node="pause-246701"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.327345    3219 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: I0917 18:10:53.328775    3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 18:10:53 pause-246701 kubelet[3219]: E0917 18:10:53.338030    3219 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-246701\" already exists" pod="kube-system/kube-apiserver-pause-246701"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.114246    3219 apiserver.go:52] "Watching apiserver"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.160047    3219 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.173023    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de638753-d03b-438c-8d1f-d43af2bbcce4-lib-modules\") pod \"kube-proxy-vxgcn\" (UID: \"de638753-d03b-438c-8d1f-d43af2bbcce4\") " pod="kube-system/kube-proxy-vxgcn"
	Sep 17 18:10:54 pause-246701 kubelet[3219]: I0917 18:10:54.173241    3219 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de638753-d03b-438c-8d1f-d43af2bbcce4-xtables-lock\") pod \"kube-proxy-vxgcn\" (UID: \"de638753-d03b-438c-8d1f-d43af2bbcce4\") " pod="kube-system/kube-proxy-vxgcn"
	Sep 17 18:11:00 pause-246701 kubelet[3219]: E0917 18:11:00.226252    3219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596660225817779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:00 pause-246701 kubelet[3219]: E0917 18:11:00.226637    3219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596660225817779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:10 pause-246701 kubelet[3219]: E0917 18:11:10.229495    3219 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596670229019264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:11:10 pause-246701 kubelet[3219]: E0917 18:11:10.229530    3219 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726596670229019264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-246701 -n pause-246701
helpers_test.go:261: (dbg) Run:  kubectl --context pause-246701 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (81.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (299.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m59.520765796s)

                                                
                                                
-- stdout --
	* [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:17:24.511656   70593 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:17:24.511811   70593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:17:24.511823   70593 out.go:358] Setting ErrFile to fd 2...
	I0917 18:17:24.511833   70593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:17:24.512010   70593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:17:24.512576   70593 out.go:352] Setting JSON to false
	I0917 18:17:24.513569   70593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7159,"bootTime":1726589885,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:17:24.513656   70593 start.go:139] virtualization: kvm guest
	I0917 18:17:24.515895   70593 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:17:24.518210   70593 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:17:24.518219   70593 notify.go:220] Checking for updates...
	I0917 18:17:24.519924   70593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:17:24.521263   70593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:17:24.522755   70593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:17:24.524361   70593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:17:24.527432   70593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:17:24.529301   70593 config.go:182] Loaded profile config "bridge-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:17:24.529406   70593 config.go:182] Loaded profile config "enable-default-cni-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:17:24.529500   70593 config.go:182] Loaded profile config "flannel-639892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:17:24.529589   70593 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:17:24.569953   70593 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:17:24.571377   70593 start.go:297] selected driver: kvm2
	I0917 18:17:24.571393   70593 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:17:24.571404   70593 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:17:24.572222   70593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:17:24.572305   70593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:17:24.590718   70593 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:17:24.590771   70593 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 18:17:24.591051   70593 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:17:24.591089   70593 cni.go:84] Creating CNI manager for ""
	I0917 18:17:24.591145   70593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:17:24.591214   70593 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 18:17:24.591331   70593 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:17:24.591507   70593 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:17:24.593492   70593 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:17:24.594829   70593 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:17:24.594876   70593 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:17:24.594896   70593 cache.go:56] Caching tarball of preloaded images
	I0917 18:17:24.594987   70593 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:17:24.595001   70593 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:17:24.595126   70593 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:17:24.595147   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json: {Name:mkb4215e2edc045dbb5befb53a79bf398f5477ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:17:24.595306   70593 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:17:50.754692   70593 start.go:364] duration metric: took 26.159340611s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:17:50.754763   70593 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:17:50.754902   70593 start.go:125] createHost starting for "" (driver="kvm2")
	I0917 18:17:50.756765   70593 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 18:17:50.756963   70593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:17:50.757012   70593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:17:50.778524   70593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0917 18:17:50.778997   70593 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:17:50.779559   70593 main.go:141] libmachine: Using API Version  1
	I0917 18:17:50.779585   70593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:17:50.779997   70593 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:17:50.780168   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:17:50.780304   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:17:50.780532   70593 start.go:159] libmachine.API.Create for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:17:50.780575   70593 client.go:168] LocalClient.Create starting
	I0917 18:17:50.780613   70593 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem
	I0917 18:17:50.780656   70593 main.go:141] libmachine: Decoding PEM data...
	I0917 18:17:50.780676   70593 main.go:141] libmachine: Parsing certificate...
	I0917 18:17:50.780747   70593 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem
	I0917 18:17:50.780771   70593 main.go:141] libmachine: Decoding PEM data...
	I0917 18:17:50.780784   70593 main.go:141] libmachine: Parsing certificate...
	I0917 18:17:50.780814   70593 main.go:141] libmachine: Running pre-create checks...
	I0917 18:17:50.780850   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .PreCreateCheck
	I0917 18:17:50.781213   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:17:50.781704   70593 main.go:141] libmachine: Creating machine...
	I0917 18:17:50.781724   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .Create
	I0917 18:17:50.781865   70593 main.go:141] libmachine: (old-k8s-version-190698) Creating KVM machine...
	I0917 18:17:50.783336   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found existing default KVM network
	I0917 18:17:50.784834   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:50.784634   71564 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:b5:b2} reservation:<nil>}
	I0917 18:17:50.785704   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:50.785603   71564 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:64:09} reservation:<nil>}
	I0917 18:17:50.786918   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:50.786837   71564 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027cd50}
	I0917 18:17:50.786977   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | created network xml: 
	I0917 18:17:50.787003   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | <network>
	I0917 18:17:50.787031   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   <name>mk-old-k8s-version-190698</name>
	I0917 18:17:50.787070   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   <dns enable='no'/>
	I0917 18:17:50.787082   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   
	I0917 18:17:50.787091   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0917 18:17:50.787100   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |     <dhcp>
	I0917 18:17:50.787112   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0917 18:17:50.787123   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |     </dhcp>
	I0917 18:17:50.787143   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   </ip>
	I0917 18:17:50.787154   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG |   
	I0917 18:17:50.787163   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | </network>
	I0917 18:17:50.787185   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | 
	I0917 18:17:50.792643   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | trying to create private KVM network mk-old-k8s-version-190698 192.168.61.0/24...
	I0917 18:17:50.879512   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | private KVM network mk-old-k8s-version-190698 192.168.61.0/24 created
	I0917 18:17:50.879570   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:50.879474   71564 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:17:50.879602   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting up store path in /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698 ...
	I0917 18:17:50.879629   70593 main.go:141] libmachine: (old-k8s-version-190698) Building disk image from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 18:17:50.879648   70593 main.go:141] libmachine: (old-k8s-version-190698) Downloading /home/jenkins/minikube-integration/19662-11085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0917 18:17:51.144366   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:51.144188   71564 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa...
	I0917 18:17:51.500499   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:51.500363   71564 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/old-k8s-version-190698.rawdisk...
	I0917 18:17:51.500532   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Writing magic tar header
	I0917 18:17:51.500545   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Writing SSH key tar header
	I0917 18:17:51.500553   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:51.500473   71564 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698 ...
	I0917 18:17:51.500604   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698
	I0917 18:17:51.500653   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698 (perms=drwx------)
	I0917 18:17:51.500667   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube/machines
	I0917 18:17:51.500687   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube/machines (perms=drwxr-xr-x)
	I0917 18:17:51.500721   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085/.minikube (perms=drwxr-xr-x)
	I0917 18:17:51.500734   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:17:51.500854   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19662-11085
	I0917 18:17:51.500883   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0917 18:17:51.500894   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins/minikube-integration/19662-11085 (perms=drwxrwxr-x)
	I0917 18:17:51.500906   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home/jenkins
	I0917 18:17:51.500919   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0917 18:17:51.500931   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Checking permissions on dir: /home
	I0917 18:17:51.500953   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Skipping /home - not owner
	I0917 18:17:51.500969   70593 main.go:141] libmachine: (old-k8s-version-190698) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0917 18:17:51.500980   70593 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:17:51.502170   70593 main.go:141] libmachine: (old-k8s-version-190698) define libvirt domain using xml: 
	I0917 18:17:51.502194   70593 main.go:141] libmachine: (old-k8s-version-190698) <domain type='kvm'>
	I0917 18:17:51.502223   70593 main.go:141] libmachine: (old-k8s-version-190698)   <name>old-k8s-version-190698</name>
	I0917 18:17:51.502241   70593 main.go:141] libmachine: (old-k8s-version-190698)   <memory unit='MiB'>2200</memory>
	I0917 18:17:51.502253   70593 main.go:141] libmachine: (old-k8s-version-190698)   <vcpu>2</vcpu>
	I0917 18:17:51.502262   70593 main.go:141] libmachine: (old-k8s-version-190698)   <features>
	I0917 18:17:51.502269   70593 main.go:141] libmachine: (old-k8s-version-190698)     <acpi/>
	I0917 18:17:51.502279   70593 main.go:141] libmachine: (old-k8s-version-190698)     <apic/>
	I0917 18:17:51.502286   70593 main.go:141] libmachine: (old-k8s-version-190698)     <pae/>
	I0917 18:17:51.502294   70593 main.go:141] libmachine: (old-k8s-version-190698)     
	I0917 18:17:51.502301   70593 main.go:141] libmachine: (old-k8s-version-190698)   </features>
	I0917 18:17:51.502310   70593 main.go:141] libmachine: (old-k8s-version-190698)   <cpu mode='host-passthrough'>
	I0917 18:17:51.502316   70593 main.go:141] libmachine: (old-k8s-version-190698)   
	I0917 18:17:51.502321   70593 main.go:141] libmachine: (old-k8s-version-190698)   </cpu>
	I0917 18:17:51.502329   70593 main.go:141] libmachine: (old-k8s-version-190698)   <os>
	I0917 18:17:51.502343   70593 main.go:141] libmachine: (old-k8s-version-190698)     <type>hvm</type>
	I0917 18:17:51.502354   70593 main.go:141] libmachine: (old-k8s-version-190698)     <boot dev='cdrom'/>
	I0917 18:17:51.502360   70593 main.go:141] libmachine: (old-k8s-version-190698)     <boot dev='hd'/>
	I0917 18:17:51.502368   70593 main.go:141] libmachine: (old-k8s-version-190698)     <bootmenu enable='no'/>
	I0917 18:17:51.502375   70593 main.go:141] libmachine: (old-k8s-version-190698)   </os>
	I0917 18:17:51.502384   70593 main.go:141] libmachine: (old-k8s-version-190698)   <devices>
	I0917 18:17:51.502395   70593 main.go:141] libmachine: (old-k8s-version-190698)     <disk type='file' device='cdrom'>
	I0917 18:17:51.502442   70593 main.go:141] libmachine: (old-k8s-version-190698)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/boot2docker.iso'/>
	I0917 18:17:51.502478   70593 main.go:141] libmachine: (old-k8s-version-190698)       <target dev='hdc' bus='scsi'/>
	I0917 18:17:51.502511   70593 main.go:141] libmachine: (old-k8s-version-190698)       <readonly/>
	I0917 18:17:51.502535   70593 main.go:141] libmachine: (old-k8s-version-190698)     </disk>
	I0917 18:17:51.502549   70593 main.go:141] libmachine: (old-k8s-version-190698)     <disk type='file' device='disk'>
	I0917 18:17:51.502565   70593 main.go:141] libmachine: (old-k8s-version-190698)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0917 18:17:51.502582   70593 main.go:141] libmachine: (old-k8s-version-190698)       <source file='/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/old-k8s-version-190698.rawdisk'/>
	I0917 18:17:51.502593   70593 main.go:141] libmachine: (old-k8s-version-190698)       <target dev='hda' bus='virtio'/>
	I0917 18:17:51.502601   70593 main.go:141] libmachine: (old-k8s-version-190698)     </disk>
	I0917 18:17:51.502611   70593 main.go:141] libmachine: (old-k8s-version-190698)     <interface type='network'>
	I0917 18:17:51.502622   70593 main.go:141] libmachine: (old-k8s-version-190698)       <source network='mk-old-k8s-version-190698'/>
	I0917 18:17:51.502633   70593 main.go:141] libmachine: (old-k8s-version-190698)       <model type='virtio'/>
	I0917 18:17:51.502646   70593 main.go:141] libmachine: (old-k8s-version-190698)     </interface>
	I0917 18:17:51.502666   70593 main.go:141] libmachine: (old-k8s-version-190698)     <interface type='network'>
	I0917 18:17:51.502683   70593 main.go:141] libmachine: (old-k8s-version-190698)       <source network='default'/>
	I0917 18:17:51.502694   70593 main.go:141] libmachine: (old-k8s-version-190698)       <model type='virtio'/>
	I0917 18:17:51.502703   70593 main.go:141] libmachine: (old-k8s-version-190698)     </interface>
	I0917 18:17:51.502712   70593 main.go:141] libmachine: (old-k8s-version-190698)     <serial type='pty'>
	I0917 18:17:51.502718   70593 main.go:141] libmachine: (old-k8s-version-190698)       <target port='0'/>
	I0917 18:17:51.502724   70593 main.go:141] libmachine: (old-k8s-version-190698)     </serial>
	I0917 18:17:51.502729   70593 main.go:141] libmachine: (old-k8s-version-190698)     <console type='pty'>
	I0917 18:17:51.502740   70593 main.go:141] libmachine: (old-k8s-version-190698)       <target type='serial' port='0'/>
	I0917 18:17:51.502752   70593 main.go:141] libmachine: (old-k8s-version-190698)     </console>
	I0917 18:17:51.502761   70593 main.go:141] libmachine: (old-k8s-version-190698)     <rng model='virtio'>
	I0917 18:17:51.502772   70593 main.go:141] libmachine: (old-k8s-version-190698)       <backend model='random'>/dev/random</backend>
	I0917 18:17:51.502792   70593 main.go:141] libmachine: (old-k8s-version-190698)     </rng>
	I0917 18:17:51.502801   70593 main.go:141] libmachine: (old-k8s-version-190698)     
	I0917 18:17:51.502807   70593 main.go:141] libmachine: (old-k8s-version-190698)     
	I0917 18:17:51.502834   70593 main.go:141] libmachine: (old-k8s-version-190698)   </devices>
	I0917 18:17:51.502854   70593 main.go:141] libmachine: (old-k8s-version-190698) </domain>
	I0917 18:17:51.502867   70593 main.go:141] libmachine: (old-k8s-version-190698) 
	I0917 18:17:51.507303   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:c1:8f:72 in network default
	I0917 18:17:51.508073   70593 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:17:51.508093   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:51.508847   70593 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:17:51.509205   70593 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:17:51.509844   70593 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:17:51.510733   70593 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:17:53.017457   70593 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:17:53.018933   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:53.019456   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:53.019486   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:53.019433   71564 retry.go:31] will retry after 188.553748ms: waiting for machine to come up
	I0917 18:17:53.210246   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:53.211119   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:53.211146   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:53.211084   71564 retry.go:31] will retry after 270.443262ms: waiting for machine to come up
	I0917 18:17:53.483599   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:53.484429   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:53.484459   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:53.484389   71564 retry.go:31] will retry after 326.986895ms: waiting for machine to come up
	I0917 18:17:53.813033   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:53.813636   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:53.813663   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:53.813552   71564 retry.go:31] will retry after 523.659215ms: waiting for machine to come up
	I0917 18:17:54.339575   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:54.340063   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:54.340091   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:54.340018   71564 retry.go:31] will retry after 743.56114ms: waiting for machine to come up
	I0917 18:17:55.084861   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:55.085487   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:55.085510   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:55.085433   71564 retry.go:31] will retry after 675.387131ms: waiting for machine to come up
	I0917 18:17:55.763023   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:55.763528   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:55.763550   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:55.763462   71564 retry.go:31] will retry after 1.005682325s: waiting for machine to come up
	I0917 18:17:56.770961   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:56.771505   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:56.771531   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:56.771463   71564 retry.go:31] will retry after 1.469441853s: waiting for machine to come up
	I0917 18:17:58.242545   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:58.243142   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:58.243171   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:58.243097   71564 retry.go:31] will retry after 1.692812044s: waiting for machine to come up
	I0917 18:17:59.937145   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:17:59.937667   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:17:59.937691   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:17:59.937636   71564 retry.go:31] will retry after 1.633473652s: waiting for machine to come up
	I0917 18:18:01.573443   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:01.573939   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:18:01.573965   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:18:01.573892   71564 retry.go:31] will retry after 2.166322593s: waiting for machine to come up
	I0917 18:18:03.743470   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:03.743927   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:18:03.743956   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:18:03.743877   71564 retry.go:31] will retry after 3.042266102s: waiting for machine to come up
	I0917 18:18:06.787565   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:06.787976   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:18:06.788001   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:18:06.787927   71564 retry.go:31] will retry after 3.352376008s: waiting for machine to come up
	I0917 18:18:10.142479   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:10.142988   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:18:10.143017   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:18:10.142943   71564 retry.go:31] will retry after 3.52800113s: waiting for machine to come up
	I0917 18:18:13.674260   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:13.674819   70593 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:18:13.674871   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
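The retry lines above show libmachine polling libvirt for the guest's DHCP lease with growing, jittered delays until the IP appears. A minimal sketch of that wait-with-backoff pattern, assuming a hypothetical lookupLeaseIP helper (this is illustrative, not minikube's actual retry helper):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP is a hypothetical stand-in for querying the libvirt
    // network's DHCP leases by MAC address; it fails until a lease exists.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            // Sleep the base backoff plus random jitter, then grow the base,
            // roughly matching the increasing delays in the log above.
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for DHCP lease of %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:72:8a:43", 10*time.Second)
        fmt.Println(ip, err)
    }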
	I0917 18:18:13.674884   70593 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:18:13.675176   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698
	I0917 18:18:13.761607   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:18:13.761642   70593 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:18:13.761657   70593 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:18:13.765040   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:13.765517   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:13.765568   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:13.765704   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:18:13.765732   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:18:13.765762   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:18:13.765777   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:18:13.765790   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:18:13.893692   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
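The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs "exit 0" to confirm the guest accepts connections. A sketch of that probe with os/exec, reusing the options printed in the log line (the key path here is a placeholder, not the real Jenkins path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs "exit 0" over /usr/bin/ssh, mirroring the external-client
    // options shown in the log above; addr and keyPath are illustrative.
    func probeSSH(addr, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + addr,
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (%s)", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(probeSSH("192.168.61.143", "/path/to/id_rsa"))
    }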
	I0917 18:18:13.894011   70593 main.go:141] libmachine: (old-k8s-version-190698) KVM machine creation complete!
	I0917 18:18:13.894386   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:18:13.895088   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:13.895296   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:13.895520   70593 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0917 18:18:13.895540   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:18:13.897022   70593 main.go:141] libmachine: Detecting operating system of created instance...
	I0917 18:18:13.897038   70593 main.go:141] libmachine: Waiting for SSH to be available...
	I0917 18:18:13.897045   70593 main.go:141] libmachine: Getting to WaitForSSH function...
	I0917 18:18:13.897052   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:13.899799   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:13.900217   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:13.900251   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:13.900407   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:13.900666   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:13.900857   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:13.900964   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:13.901171   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:13.901467   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:13.901485   70593 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0917 18:18:14.012676   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:18:14.012697   70593 main.go:141] libmachine: Detecting the provisioner...
	I0917 18:18:14.012705   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.015528   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.015911   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.015942   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.016093   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:14.016362   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.016542   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.016737   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:14.016916   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:14.017159   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:14.017177   70593 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0917 18:18:14.134866   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0917 18:18:14.134943   70593 main.go:141] libmachine: found compatible host: buildroot
	I0917 18:18:14.134969   70593 main.go:141] libmachine: Provisioning with buildroot...
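The provisioner detection above runs "cat /etc/os-release" and concludes the host is buildroot, presumably by matching the ID/NAME fields of that file. A small, assumed sketch of parsing such output (detectProvisioner is a made-up helper for illustration):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner pulls the ID= field out of /etc/os-release text.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
        fmt.Println(detectProvisioner(sample)) // buildroot
    }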
	I0917 18:18:14.134982   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:18:14.135273   70593 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:18:14.135305   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:18:14.135528   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.138744   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.139144   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.139173   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.139317   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:14.139504   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.139673   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.139827   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:14.140021   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:14.140235   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:14.140251   70593 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:18:14.271238   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:18:14.271277   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.274329   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.274693   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.274738   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.274905   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:14.275138   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.275294   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.275467   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:14.275624   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:14.275799   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:14.275838   70593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:18:14.425676   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:18:14.425719   70593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:18:14.425747   70593 buildroot.go:174] setting up certificates
	I0917 18:18:14.425761   70593 provision.go:84] configureAuth start
	I0917 18:18:14.425780   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:18:14.426085   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:18:14.430152   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.430615   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.430645   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.431132   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.434396   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.434818   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.434845   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.435038   70593 provision.go:143] copyHostCerts
	I0917 18:18:14.435142   70593 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:18:14.435158   70593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:18:14.435236   70593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:18:14.435387   70593 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:18:14.435404   70593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:18:14.435463   70593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:18:14.435558   70593 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:18:14.435571   70593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:18:14.435600   70593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:18:14.435672   70593 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
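The provision step above generates a server certificate whose subject alternative names cover loopback, the VM IP, and the hostnames listed in the log line. A hedged sketch of populating those SANs with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SAN set mirroring the san=[...] list in the log line above.
        dnsNames := []string{"localhost", "minikube", "old-k8s-version-190698"}
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.143")}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-190698"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        // Self-signed for the sake of a runnable example.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }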
	I0917 18:18:14.744899   70593 provision.go:177] copyRemoteCerts
	I0917 18:18:14.744959   70593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:18:14.744984   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.747977   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.748400   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.748425   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.748758   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:14.748970   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.749158   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:14.749344   70593 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:18:14.852179   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:18:14.884295   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:18:14.919676   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:18:14.955449   70593 provision.go:87] duration metric: took 529.667957ms to configureAuth
	I0917 18:18:14.955482   70593 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:18:14.955690   70593 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:18:14.955785   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:14.958863   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.959141   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:14.959165   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:14.959394   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:14.959604   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.959801   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:14.959957   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:14.960125   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:14.960316   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:14.960335   70593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:18:15.219522   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:18:15.219587   70593 main.go:141] libmachine: Checking connection to Docker...
	I0917 18:18:15.219600   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetURL
	I0917 18:18:15.220918   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using libvirt version 6000000
	I0917 18:18:15.223964   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.224344   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.224375   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.224569   70593 main.go:141] libmachine: Docker is up and running!
	I0917 18:18:15.224584   70593 main.go:141] libmachine: Reticulating splines...
	I0917 18:18:15.224593   70593 client.go:171] duration metric: took 24.444006269s to LocalClient.Create
	I0917 18:18:15.224617   70593 start.go:167] duration metric: took 24.44408572s to libmachine.API.Create "old-k8s-version-190698"
	I0917 18:18:15.224630   70593 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:18:15.224644   70593 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:18:15.224668   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:15.224904   70593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:18:15.224929   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:15.227367   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.227751   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.227776   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.227983   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:15.228166   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:15.228310   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:15.228446   70593 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:18:15.324063   70593 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:18:15.330081   70593 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:18:15.330181   70593 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:18:15.330263   70593 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:18:15.330390   70593 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:18:15.330541   70593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:18:15.341670   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:18:15.374445   70593 start.go:296] duration metric: took 149.800611ms for postStartSetup
	I0917 18:18:15.374509   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:18:15.375201   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:18:15.378402   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.378860   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.378896   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.379161   70593 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:18:15.379397   70593 start.go:128] duration metric: took 24.624474998s to createHost
	I0917 18:18:15.379447   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:15.382105   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.382487   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.382520   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.382811   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:15.383015   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:15.383202   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:15.383320   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:15.383481   70593 main.go:141] libmachine: Using SSH client type: native
	I0917 18:18:15.383706   70593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:18:15.383721   70593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:18:15.502396   70593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597095.440363257
	
	I0917 18:18:15.502424   70593 fix.go:216] guest clock: 1726597095.440363257
	I0917 18:18:15.502435   70593 fix.go:229] Guest: 2024-09-17 18:18:15.440363257 +0000 UTC Remote: 2024-09-17 18:18:15.379416052 +0000 UTC m=+50.909753838 (delta=60.947205ms)
	I0917 18:18:15.502498   70593 fix.go:200] guest clock delta is within tolerance: 60.947205ms
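The two fix.go lines above compare the guest clock against the host clock; the reported 60.947205ms is simply the guest timestamp minus the remote timestamp. A quick check of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps from the two fix.go lines above (Unix seconds, nanoseconds).
        guest := time.Unix(1726597095, 440363257)  // 18:18:15.440363257 UTC
        remote := time.Unix(1726597095, 379416052) // 18:18:15.379416052 UTC
        fmt.Println(guest.Sub(remote)) // 60.947205ms, the delta reported in the log
    }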
	I0917 18:18:15.502509   70593 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 24.747790223s
	I0917 18:18:15.502542   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:15.502903   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:18:15.506524   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.507063   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.507088   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.507351   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:15.507913   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:15.508156   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:18:15.508239   70593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:18:15.508293   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:15.508415   70593 ssh_runner.go:195] Run: cat /version.json
	I0917 18:18:15.508451   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:18:15.511886   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.512434   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.512481   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.512666   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.512704   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:15.512959   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:15.513094   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:15.513129   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:15.513178   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:15.513331   70593 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:18:15.513715   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:18:15.513906   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:18:15.514068   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:18:15.514216   70593 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:18:15.613635   70593 ssh_runner.go:195] Run: systemctl --version
	I0917 18:18:15.653367   70593 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:18:15.837806   70593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:18:15.846602   70593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:18:15.846676   70593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:18:15.872049   70593 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:18:15.872144   70593 start.go:495] detecting cgroup driver to use...
	I0917 18:18:15.872265   70593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:18:15.896112   70593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:18:15.917038   70593 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:18:15.917117   70593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:18:15.937694   70593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:18:15.958121   70593 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:18:16.139763   70593 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:18:16.314611   70593 docker.go:233] disabling docker service ...
	I0917 18:18:16.314686   70593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:18:16.331970   70593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:18:16.347463   70593 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:18:16.493043   70593 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:18:16.660104   70593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:18:16.681183   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:18:16.717735   70593 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:18:16.717805   70593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:18:16.732960   70593 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:18:16.733050   70593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:18:16.748556   70593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:18:16.764368   70593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:18:16.779487   70593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:18:16.796491   70593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:18:16.811635   70593 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:18:16.811697   70593 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:18:16.833257   70593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
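The sysctl check above fails because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the very next step is a modprobe. A minimal sketch of that check-then-load fallback (error handling is mine, only the command names come from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter stats the bridge sysctl path and loads the
    // br_netfilter module when the /proc entry is missing.
    func ensureBridgeNetfilter() error {
        const path = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(path); err == nil {
            return nil // module already loaded
        }
        if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureBridgeNetfilter())
    }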
	I0917 18:18:16.851593   70593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:18:17.041401   70593 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:18:17.169763   70593 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:18:17.169868   70593 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:18:17.178691   70593 start.go:563] Will wait 60s for crictl version
	I0917 18:18:17.178766   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:17.185962   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:18:17.257631   70593 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:18:17.257734   70593 ssh_runner.go:195] Run: crio --version
	I0917 18:18:17.300685   70593 ssh_runner.go:195] Run: crio --version
	I0917 18:18:17.349062   70593 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:18:17.350567   70593 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:18:17.354315   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:17.354793   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:18:07 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:18:17.354825   70593 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:18:17.355217   70593 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:18:17.364152   70593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:18:17.384667   70593 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:18:17.384792   70593 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:18:17.384833   70593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:18:17.437319   70593 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:18:17.437396   70593 ssh_runner.go:195] Run: which lz4
	I0917 18:18:17.442692   70593 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:18:17.449161   70593 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:18:17.449190   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:18:19.788230   70593 crio.go:462] duration metric: took 2.345575192s to copy over tarball
	I0917 18:18:19.788307   70593 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:18:23.201894   70593 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.41355318s)
	I0917 18:18:23.201926   70593 crio.go:469] duration metric: took 3.413668173s to extract the tarball
	I0917 18:18:23.201933   70593 ssh_runner.go:146] rm: /preloaded.tar.lz4
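From the figures logged above, the 473,237,281-byte preload tarball was copied in about 2.35s, which works out to roughly 192 MiB/s; extraction then took another 3.41s. A quick check of that throughput calculation:

    package main

    import "fmt"

    func main() {
        const sizeBytes = 473237281.0   // preloaded.tar.lz4 size from the scp line
        const copySeconds = 2.345575192 // "took ... to copy over tarball"
        mibPerSec := sizeBytes / copySeconds / (1024 * 1024)
        fmt.Printf("%.1f MiB/s\n", mibPerSec) // ≈ 192.4 MiB/s
    }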
	I0917 18:18:23.248699   70593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:18:23.327076   70593 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:18:23.327101   70593 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:18:23.327145   70593 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:18:23.327203   70593 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.327222   70593 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:18:23.327261   70593 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.327202   70593 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.327327   70593 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.327335   70593 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.327153   70593 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.328957   70593 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.329021   70593 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.329025   70593 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.329057   70593 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:18:23.328961   70593 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.328963   70593 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.329613   70593 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.329613   70593 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:18:23.504449   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.508675   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.509452   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.512825   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.516814   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.527550   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:18:23.556416   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.610612   70593 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:18:23.610801   70593 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.610891   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.728697   70593 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:18:23.728728   70593 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:18:23.728742   70593 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.728762   70593 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.728792   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.728799   70593 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:18:23.728851   70593 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.728863   70593 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:18:23.728882   70593 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.728920   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.728887   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.728810   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.736183   70593 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:18:23.736227   70593 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:18:23.736279   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.736292   70593 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:18:23.736326   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.736336   70593 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.736372   70593 ssh_runner.go:195] Run: which crictl
	I0917 18:18:23.744990   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.745066   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.745094   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.745156   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.842998   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.843033   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.843100   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:18:23.894127   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:23.894216   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:23.894228   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:18:23.912936   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:23.998214   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:18:23.998419   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:23.998473   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:18:24.074567   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:18:24.080215   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:18:24.080314   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:18:24.080275   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:18:24.114618   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:18:24.189714   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:18:24.189802   70593 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:18:24.206544   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:18:24.255179   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:18:24.255304   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:18:24.255306   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:18:24.262889   70593 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:18:24.306066   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:18:24.306197   70593 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:18:24.425107   70593 cache_images.go:92] duration metric: took 1.097987878s to LoadCachedImages
	W0917 18:18:24.425216   70593 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0917 18:18:24.425256   70593 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:18:24.425371   70593 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:18:24.425457   70593 ssh_runner.go:195] Run: crio config
	I0917 18:18:24.485417   70593 cni.go:84] Creating CNI manager for ""
	I0917 18:18:24.485446   70593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:18:24.485457   70593 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:18:24.485481   70593 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:18:24.485664   70593 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:18:24.485773   70593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:18:24.499662   70593 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:18:24.499734   70593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:18:24.510974   70593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:18:24.533518   70593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:18:24.558750   70593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:18:24.581691   70593 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:18:24.586198   70593 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:18:24.601180   70593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:18:24.745970   70593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:18:24.766624   70593 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:18:24.766652   70593 certs.go:194] generating shared ca certs ...
	I0917 18:18:24.766676   70593 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:24.766938   70593 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:18:24.767012   70593 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:18:24.767032   70593 certs.go:256] generating profile certs ...
	I0917 18:18:24.767114   70593 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:18:24.767139   70593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.crt with IP's: []
	I0917 18:18:24.935581   70593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.crt ...
	I0917 18:18:24.935612   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.crt: {Name:mk46ff34d0ddc6064bec71d49662d8284e7fcf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:24.935825   70593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key ...
	I0917 18:18:24.935845   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key: {Name:mke056c719416246146b088311af7be9ed6ce6bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:24.935961   70593 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:18:24.935980   70593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt.8ffdb4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.143]
	I0917 18:18:25.102645   70593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt.8ffdb4af ...
	I0917 18:18:25.102683   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt.8ffdb4af: {Name:mk52c0128422e8e9851e498745319d7e336d6960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:25.102874   70593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af ...
	I0917 18:18:25.102891   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af: {Name:mkc14c2bd610f7c23ac66e9417c65836ae872b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:25.102991   70593 certs.go:381] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt.8ffdb4af -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt
	I0917 18:18:25.103103   70593 certs.go:385] copying /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af -> /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key
	I0917 18:18:25.103186   70593 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:18:25.103213   70593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt with IP's: []
	I0917 18:18:25.349210   70593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt ...
	I0917 18:18:25.349269   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt: {Name:mkbf07b294f95eac28091fd87f2652f6b3f079c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:25.349480   70593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key ...
	I0917 18:18:25.349498   70593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key: {Name:mkf30eb1427481fd3c74411423c3b36f8e366273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:18:25.349735   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:18:25.349790   70593 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:18:25.349806   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:18:25.349839   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:18:25.349872   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:18:25.349905   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:18:25.349979   70593 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:18:25.350549   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:18:25.382467   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:18:25.411152   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:18:25.439729   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:18:25.466632   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:18:25.496783   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:18:25.527161   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:18:25.556092   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:18:25.586429   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:18:25.615401   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:18:25.641878   70593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:18:25.670727   70593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:18:25.701186   70593 ssh_runner.go:195] Run: openssl version
	I0917 18:18:25.708076   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:18:25.727872   70593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:18:25.739453   70593 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:18:25.739523   70593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:18:25.747047   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:18:25.765921   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:18:25.779595   70593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:18:25.785780   70593 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:18:25.785856   70593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:18:25.792575   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:18:25.805699   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:18:25.819325   70593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:18:25.824466   70593 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:18:25.824551   70593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:18:25.830890   70593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:18:25.848216   70593 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:18:25.853020   70593 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 18:18:25.853087   70593 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:18:25.853156   70593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:18:25.853211   70593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:18:25.894872   70593 cri.go:89] found id: ""
	I0917 18:18:25.894950   70593 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:18:25.907112   70593 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:18:25.918883   70593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:18:25.930372   70593 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:18:25.930395   70593 kubeadm.go:157] found existing configuration files:
	
	I0917 18:18:25.930473   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:18:25.941089   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:18:25.941145   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:18:25.952179   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:18:25.962858   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:18:25.962924   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:18:25.974519   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:18:25.985334   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:18:25.985406   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:18:25.995876   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:18:26.005976   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:18:26.006044   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:18:26.018194   70593 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:18:26.159906   70593 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:18:26.159990   70593 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:18:26.323223   70593 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:18:26.323397   70593 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:18:26.323561   70593 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:18:26.583609   70593 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:18:26.586440   70593 out.go:235]   - Generating certificates and keys ...
	I0917 18:18:26.586547   70593 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:18:26.586631   70593 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:18:26.789804   70593 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 18:18:27.322523   70593 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 18:18:27.472139   70593 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 18:18:27.678822   70593 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 18:18:27.779537   70593 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 18:18:27.779945   70593 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	I0917 18:18:27.919175   70593 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 18:18:27.919505   70593 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	I0917 18:18:28.621495   70593 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 18:18:28.809018   70593 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 18:18:29.069249   70593 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 18:18:29.069673   70593 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:18:29.231216   70593 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:18:29.332620   70593 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:18:29.582345   70593 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:18:29.645105   70593 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:18:29.667576   70593 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:18:29.667987   70593 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:18:29.668063   70593 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:18:29.814080   70593 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:18:29.815980   70593 out.go:235]   - Booting up control plane ...
	I0917 18:18:29.816100   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:18:29.825865   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:18:29.826789   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:18:29.827647   70593 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:18:29.832262   70593 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:19:09.774796   70593 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:19:09.775641   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:19:09.775924   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:19:14.775168   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:19:14.775496   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:19:24.774789   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:19:24.775031   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:19:44.775306   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:19:44.775518   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:20:24.777276   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:20:24.777519   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:20:24.777558   70593 kubeadm.go:310] 
	I0917 18:20:24.777633   70593 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:20:24.777689   70593 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:20:24.777712   70593 kubeadm.go:310] 
	I0917 18:20:24.777753   70593 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:20:24.777785   70593 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:20:24.777873   70593 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:20:24.777887   70593 kubeadm.go:310] 
	I0917 18:20:24.777974   70593 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:20:24.778023   70593 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:20:24.778073   70593 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:20:24.778083   70593 kubeadm.go:310] 
	I0917 18:20:24.778175   70593 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:20:24.778250   70593 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:20:24.778265   70593 kubeadm.go:310] 
	I0917 18:20:24.778411   70593 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:20:24.778518   70593 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:20:24.778612   70593 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:20:24.778711   70593 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:20:24.778726   70593 kubeadm.go:310] 
	I0917 18:20:24.779031   70593 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:20:24.779155   70593 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:20:24.779320   70593 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:20:24.779376   70593 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-190698] and IPs [192.168.61.143 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:20:24.779425   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:20:27.026490   70593 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.247031505s)
	I0917 18:20:27.026588   70593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:20:27.043924   70593 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:20:27.057170   70593 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:20:27.057202   70593 kubeadm.go:157] found existing configuration files:
	
	I0917 18:20:27.057274   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:20:27.067392   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:20:27.067451   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:20:27.077819   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:20:27.087660   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:20:27.087729   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:20:27.097708   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:20:27.108695   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:20:27.108764   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:20:27.120109   70593 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:20:27.130243   70593 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:20:27.130315   70593 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:20:27.140657   70593 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:20:27.217630   70593 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:20:27.217714   70593 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:20:27.363268   70593 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:20:27.363471   70593 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:20:27.363622   70593 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:20:27.549735   70593 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:20:27.552035   70593 out.go:235]   - Generating certificates and keys ...
	I0917 18:20:27.552155   70593 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:20:27.552247   70593 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:20:27.552352   70593 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:20:27.552447   70593 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:20:27.552574   70593 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:20:27.552658   70593 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:20:27.552754   70593 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:20:27.552841   70593 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:20:27.552928   70593 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:20:27.553339   70593 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:20:27.553486   70593 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:20:27.553581   70593 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:20:27.659199   70593 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:20:27.762376   70593 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:20:27.915552   70593 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:20:28.162320   70593 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:20:28.176962   70593 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:20:28.178023   70593 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:20:28.178095   70593 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:20:28.326382   70593 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:20:28.328571   70593 out.go:235]   - Booting up control plane ...
	I0917 18:20:28.328695   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:20:28.332956   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:20:28.333929   70593 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:20:28.342652   70593 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:20:28.351400   70593 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:21:08.351777   70593 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:21:08.351904   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:21:08.352161   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:21:13.352555   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:21:13.352793   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:21:23.353352   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:21:23.353635   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:21:43.354736   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:21:43.354939   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:22:23.357704   70593 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:22:23.357975   70593 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:22:23.358021   70593 kubeadm.go:310] 
	I0917 18:22:23.358089   70593 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:22:23.358148   70593 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:22:23.358158   70593 kubeadm.go:310] 
	I0917 18:22:23.358202   70593 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:22:23.358242   70593 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:22:23.358385   70593 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:22:23.358400   70593 kubeadm.go:310] 
	I0917 18:22:23.358533   70593 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:22:23.358577   70593 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:22:23.358622   70593 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:22:23.358642   70593 kubeadm.go:310] 
	I0917 18:22:23.358748   70593 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:22:23.358854   70593 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:22:23.358871   70593 kubeadm.go:310] 
	I0917 18:22:23.359003   70593 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:22:23.359125   70593 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:22:23.359240   70593 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:22:23.359342   70593 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:22:23.359351   70593 kubeadm.go:310] 
	I0917 18:22:23.359959   70593 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:22:23.360046   70593 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:22:23.360161   70593 kubeadm.go:394] duration metric: took 3m57.507077246s to StartCluster
	I0917 18:22:23.360193   70593 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:22:23.360206   70593 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:22:23.360294   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:22:23.404415   70593 cri.go:89] found id: ""
	I0917 18:22:23.404445   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.404456   70593 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:22:23.404463   70593 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:22:23.404544   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:22:23.444237   70593 cri.go:89] found id: ""
	I0917 18:22:23.444267   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.444278   70593 logs.go:278] No container was found matching "etcd"
	I0917 18:22:23.444286   70593 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:22:23.444351   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:22:23.480425   70593 cri.go:89] found id: ""
	I0917 18:22:23.480452   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.480463   70593 logs.go:278] No container was found matching "coredns"
	I0917 18:22:23.480470   70593 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:22:23.480546   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:22:23.516036   70593 cri.go:89] found id: ""
	I0917 18:22:23.516060   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.516068   70593 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:22:23.516074   70593 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:22:23.516134   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:22:23.550042   70593 cri.go:89] found id: ""
	I0917 18:22:23.550071   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.550083   70593 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:22:23.550091   70593 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:22:23.550147   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:22:23.584479   70593 cri.go:89] found id: ""
	I0917 18:22:23.584512   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.584522   70593 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:22:23.584528   70593 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:22:23.584590   70593 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:22:23.621780   70593 cri.go:89] found id: ""
	I0917 18:22:23.621808   70593 logs.go:276] 0 containers: []
	W0917 18:22:23.621817   70593 logs.go:278] No container was found matching "kindnet"
	I0917 18:22:23.621826   70593 logs.go:123] Gathering logs for kubelet ...
	I0917 18:22:23.621837   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:22:23.673807   70593 logs.go:123] Gathering logs for dmesg ...
	I0917 18:22:23.673853   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:22:23.687707   70593 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:22:23.687734   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:22:23.813402   70593 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:22:23.813427   70593 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:22:23.813438   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:22:23.916282   70593 logs.go:123] Gathering logs for container status ...
	I0917 18:22:23.916322   70593 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0917 18:22:23.974624   70593 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:22:23.974692   70593 out.go:270] * 
	* 
	W0917 18:22:23.974760   70593 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:22:23.974781   70593 out.go:270] * 
	* 
	W0917 18:22:23.975677   70593 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:22:23.978846   70593 out.go:201] 
	W0917 18:22:23.980017   70593 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:22:23.980060   70593 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:22:23.980078   70593 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:22:23.981415   70593 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 6 (233.344522ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:24.258796   77135 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-190698" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (299.81s)
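The start above exits with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase times out: no control-plane containers are ever found and the kubelet health endpoint on 10248 keeps refusing connections. A minimal triage sequence, assuming SSH access to the old-k8s-version-190698 VM and using only the commands the log itself suggests, might look like:

	# on the VM (e.g. via `minikube ssh -p old-k8s-version-190698`):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host, the log's own suggestion is to retry the start (with the original flags)
	# and pin the kubelet cgroup driver:
	out/minikube-linux-amd64 start -p old-k8s-version-190698 --extra-config=kubelet.cgroup-driver=systemd ...

This is a sketch of the troubleshooting path the log recommends, not a confirmed root cause; the related upstream issue is https://github.com/kubernetes/minikube/issues/4172.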

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-081863 --alsologtostderr -v=3
E0917 18:19:56.205295   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.211690   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.223182   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.244697   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.286160   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.368399   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.530084   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:56.851971   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:57.494336   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:58.776006   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:59.149661   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-081863 --alsologtostderr -v=3: exit status 82 (2m0.544072483s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-081863"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:19:54.188687   76146 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:19:54.188799   76146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:19:54.188804   76146 out.go:358] Setting ErrFile to fd 2...
	I0917 18:19:54.188808   76146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:19:54.188976   76146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:19:54.189188   76146 out.go:352] Setting JSON to false
	I0917 18:19:54.189312   76146 mustload.go:65] Loading cluster: embed-certs-081863
	I0917 18:19:54.189661   76146 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:19:54.189719   76146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:19:54.189888   76146 mustload.go:65] Loading cluster: embed-certs-081863
	I0917 18:19:54.189993   76146 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:19:54.190016   76146 stop.go:39] StopHost: embed-certs-081863
	I0917 18:19:54.190426   76146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:19:54.190467   76146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:19:54.206239   76146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I0917 18:19:54.206745   76146 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:19:54.207346   76146 main.go:141] libmachine: Using API Version  1
	I0917 18:19:54.207374   76146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:19:54.207764   76146 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:19:54.210373   76146 out.go:177] * Stopping node "embed-certs-081863"  ...
	I0917 18:19:54.211768   76146 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 18:19:54.211825   76146 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:19:54.212105   76146 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 18:19:54.212130   76146 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:19:54.215539   76146 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:19:54.216070   76146 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:19:03 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:19:54.216103   76146 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:19:54.216300   76146 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:19:54.216485   76146 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:19:54.216639   76146 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:19:54.216791   76146 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:19:54.325736   76146 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 18:19:54.393445   76146 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 18:19:54.465072   76146 main.go:141] libmachine: Stopping "embed-certs-081863"...
	I0917 18:19:54.465118   76146 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:19:54.466990   76146 main.go:141] libmachine: (embed-certs-081863) Calling .Stop
	I0917 18:19:54.471419   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 0/120
	I0917 18:19:55.473060   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 1/120
	I0917 18:19:56.474644   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 2/120
	I0917 18:19:57.475969   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 3/120
	I0917 18:19:58.477853   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 4/120
	I0917 18:19:59.479701   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 5/120
	I0917 18:20:00.482191   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 6/120
	I0917 18:20:01.484050   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 7/120
	I0917 18:20:02.485654   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 8/120
	I0917 18:20:03.487749   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 9/120
	I0917 18:20:04.489853   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 10/120
	I0917 18:20:05.491440   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 11/120
	I0917 18:20:06.493010   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 12/120
	I0917 18:20:07.494600   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 13/120
	I0917 18:20:08.496455   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 14/120
	I0917 18:20:09.498474   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 15/120
	I0917 18:20:10.500901   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 16/120
	I0917 18:20:11.502383   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 17/120
	I0917 18:20:12.503648   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 18/120
	I0917 18:20:13.505037   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 19/120
	I0917 18:20:14.507233   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 20/120
	I0917 18:20:15.508995   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 21/120
	I0917 18:20:16.510633   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 22/120
	I0917 18:20:17.512274   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 23/120
	I0917 18:20:18.513957   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 24/120
	I0917 18:20:19.516137   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 25/120
	I0917 18:20:20.517532   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 26/120
	I0917 18:20:21.519874   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 27/120
	I0917 18:20:22.521329   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 28/120
	I0917 18:20:23.522971   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 29/120
	I0917 18:20:24.524272   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 30/120
	I0917 18:20:25.525591   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 31/120
	I0917 18:20:26.527075   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 32/120
	I0917 18:20:27.528690   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 33/120
	I0917 18:20:28.530292   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 34/120
	I0917 18:20:29.532483   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 35/120
	I0917 18:20:30.534058   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 36/120
	I0917 18:20:31.535508   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 37/120
	I0917 18:20:32.537012   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 38/120
	I0917 18:20:33.538384   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 39/120
	I0917 18:20:34.540673   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 40/120
	I0917 18:20:35.542377   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 41/120
	I0917 18:20:36.543719   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 42/120
	I0917 18:20:37.546015   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 43/120
	I0917 18:20:38.547762   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 44/120
	I0917 18:20:39.550159   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 45/120
	I0917 18:20:40.551617   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 46/120
	I0917 18:20:41.553394   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 47/120
	I0917 18:20:42.554841   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 48/120
	I0917 18:20:43.556571   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 49/120
	I0917 18:20:44.558830   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 50/120
	I0917 18:20:45.560351   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 51/120
	I0917 18:20:46.562237   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 52/120
	I0917 18:20:47.563777   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 53/120
	I0917 18:20:48.565283   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 54/120
	I0917 18:20:49.567347   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 55/120
	I0917 18:20:50.568789   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 56/120
	I0917 18:20:51.570230   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 57/120
	I0917 18:20:52.572336   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 58/120
	I0917 18:20:53.573631   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 59/120
	I0917 18:20:54.574950   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 60/120
	I0917 18:20:55.576435   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 61/120
	I0917 18:20:56.578077   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 62/120
	I0917 18:20:57.579704   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 63/120
	I0917 18:20:58.580939   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 64/120
	I0917 18:20:59.583007   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 65/120
	I0917 18:21:00.584604   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 66/120
	I0917 18:21:01.585963   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 67/120
	I0917 18:21:02.587347   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 68/120
	I0917 18:21:03.588643   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 69/120
	I0917 18:21:04.591116   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 70/120
	I0917 18:21:05.592393   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 71/120
	I0917 18:21:06.594378   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 72/120
	I0917 18:21:07.595739   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 73/120
	I0917 18:21:08.597315   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 74/120
	I0917 18:21:09.599374   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 75/120
	I0917 18:21:10.601157   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 76/120
	I0917 18:21:11.602572   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 77/120
	I0917 18:21:12.604028   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 78/120
	I0917 18:21:13.605646   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 79/120
	I0917 18:21:14.608036   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 80/120
	I0917 18:21:15.609514   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 81/120
	I0917 18:21:16.610855   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 82/120
	I0917 18:21:17.612225   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 83/120
	I0917 18:21:18.613602   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 84/120
	I0917 18:21:19.615862   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 85/120
	I0917 18:21:20.617453   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 86/120
	I0917 18:21:21.619809   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 87/120
	I0917 18:21:22.621083   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 88/120
	I0917 18:21:23.623304   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 89/120
	I0917 18:21:24.625627   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 90/120
	I0917 18:21:25.627117   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 91/120
	I0917 18:21:26.628580   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 92/120
	I0917 18:21:27.630303   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 93/120
	I0917 18:21:28.631657   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 94/120
	I0917 18:21:29.633764   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 95/120
	I0917 18:21:30.635336   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 96/120
	I0917 18:21:31.636665   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 97/120
	I0917 18:21:32.638197   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 98/120
	I0917 18:21:33.639862   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 99/120
	I0917 18:21:34.642264   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 100/120
	I0917 18:21:35.643575   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 101/120
	I0917 18:21:36.645005   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 102/120
	I0917 18:21:37.646557   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 103/120
	I0917 18:21:38.647909   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 104/120
	I0917 18:21:39.650035   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 105/120
	I0917 18:21:40.651479   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 106/120
	I0917 18:21:41.653062   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 107/120
	I0917 18:21:42.654393   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 108/120
	I0917 18:21:43.655762   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 109/120
	I0917 18:21:44.658146   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 110/120
	I0917 18:21:45.659613   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 111/120
	I0917 18:21:46.661060   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 112/120
	I0917 18:21:47.662350   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 113/120
	I0917 18:21:48.663716   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 114/120
	I0917 18:21:49.665760   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 115/120
	I0917 18:21:50.667665   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 116/120
	I0917 18:21:51.669270   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 117/120
	I0917 18:21:52.670801   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 118/120
	I0917 18:21:53.672364   76146 main.go:141] libmachine: (embed-certs-081863) Waiting for machine to stop 119/120
	I0917 18:21:54.673359   76146 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 18:21:54.673436   76146 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0917 18:21:54.675715   76146 out.go:201] 
	W0917 18:21:54.677137   76146 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0917 18:21:54.677153   76146 out.go:270] * 
	* 
	W0917 18:21:54.680562   76146 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:21:54.681933   76146 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-081863 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
E0917 18:21:54.798191   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.206812   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.213172   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.224572   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.245953   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.287462   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.369263   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.530794   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.852469   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:02.865799   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:03.493887   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:04.775873   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:05.039735   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:07.337405   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863: exit status 3 (18.674116691s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:13.357539   76894 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host
	E0917 18:22:13.357558   76894 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-081863" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.22s)
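The repeated "Waiting for machine to stop N/120" lines above reflect a fixed stop budget: the driver issues a stop request and then polls the VM state once per second, giving up after 120 attempts with "unable to stop vm", which the CLI surfaces as GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that shape, using a hypothetical vmState helper (this is not the actual minikube/libmachine code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a stand-in for the driver's state query; in this failure it keeps
	// returning "Running" for the whole budget. (Hypothetical helper, not a real API.)
	func vmState() string { return "Running" }

	// stopWithTimeout mirrors the loop implied by the log: poll once per second,
	// up to `attempts` times, then give up.
	func stopWithTimeout(attempts int) error {
		for i := 0; i < attempts; i++ {
			if vmState() != "Running" {
				return nil // machine stopped within the budget
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(120); err != nil {
			// Reported above as GUEST_STOP_TIMEOUT, exit status 82.
			fmt.Println("stop err:", err)
		}
	}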

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-328741 --alsologtostderr -v=3
E0917 18:20:13.607441   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:20:16.700988   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:20:29.874102   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:20:37.183100   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-328741 --alsologtostderr -v=3: exit status 82 (2m0.524294589s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-328741"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:20:10.726154   76310 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:20:10.726620   76310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:20:10.726630   76310 out.go:358] Setting ErrFile to fd 2...
	I0917 18:20:10.726635   76310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:20:10.726901   76310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:20:10.727224   76310 out.go:352] Setting JSON to false
	I0917 18:20:10.727334   76310 mustload.go:65] Loading cluster: no-preload-328741
	I0917 18:20:10.727828   76310 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:20:10.727899   76310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:20:10.728074   76310 mustload.go:65] Loading cluster: no-preload-328741
	I0917 18:20:10.728172   76310 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:20:10.728193   76310 stop.go:39] StopHost: no-preload-328741
	I0917 18:20:10.728559   76310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:20:10.728594   76310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:20:10.746011   76310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I0917 18:20:10.746589   76310 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:20:10.747345   76310 main.go:141] libmachine: Using API Version  1
	I0917 18:20:10.747379   76310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:20:10.747743   76310 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:20:10.750400   76310 out.go:177] * Stopping node "no-preload-328741"  ...
	I0917 18:20:10.751877   76310 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 18:20:10.751921   76310 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:20:10.752170   76310 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 18:20:10.752193   76310 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:20:10.755515   76310 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:20:10.755901   76310 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:18:32 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:20:10.755917   76310 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:20:10.756176   76310 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:20:10.756352   76310 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:20:10.756510   76310 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:20:10.756651   76310 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:20:10.891859   76310 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 18:20:10.936426   76310 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 18:20:10.998556   76310 main.go:141] libmachine: Stopping "no-preload-328741"...
	I0917 18:20:10.998599   76310 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:20:11.000450   76310 main.go:141] libmachine: (no-preload-328741) Calling .Stop
	I0917 18:20:11.004101   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 0/120
	I0917 18:20:12.005491   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 1/120
	I0917 18:20:13.006942   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 2/120
	I0917 18:20:14.008689   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 3/120
	I0917 18:20:15.010077   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 4/120
	I0917 18:20:16.012344   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 5/120
	I0917 18:20:17.013885   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 6/120
	I0917 18:20:18.015252   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 7/120
	I0917 18:20:19.016865   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 8/120
	I0917 18:20:20.018241   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 9/120
	I0917 18:20:21.019539   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 10/120
	I0917 18:20:22.020893   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 11/120
	I0917 18:20:23.022537   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 12/120
	I0917 18:20:24.023863   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 13/120
	I0917 18:20:25.025591   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 14/120
	I0917 18:20:26.027706   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 15/120
	I0917 18:20:27.030069   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 16/120
	I0917 18:20:28.031327   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 17/120
	I0917 18:20:29.032957   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 18/120
	I0917 18:20:30.034557   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 19/120
	I0917 18:20:31.035978   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 20/120
	I0917 18:20:32.037578   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 21/120
	I0917 18:20:33.039997   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 22/120
	I0917 18:20:34.041787   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 23/120
	I0917 18:20:35.043144   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 24/120
	I0917 18:20:36.045265   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 25/120
	I0917 18:20:37.046702   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 26/120
	I0917 18:20:38.048154   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 27/120
	I0917 18:20:39.049374   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 28/120
	I0917 18:20:40.051046   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 29/120
	I0917 18:20:41.052368   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 30/120
	I0917 18:20:42.053734   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 31/120
	I0917 18:20:43.055055   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 32/120
	I0917 18:20:44.056330   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 33/120
	I0917 18:20:45.057629   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 34/120
	I0917 18:20:46.059840   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 35/120
	I0917 18:20:47.061065   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 36/120
	I0917 18:20:48.062364   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 37/120
	I0917 18:20:49.063909   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 38/120
	I0917 18:20:50.065432   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 39/120
	I0917 18:20:51.066797   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 40/120
	I0917 18:20:52.068273   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 41/120
	I0917 18:20:53.069622   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 42/120
	I0917 18:20:54.071302   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 43/120
	I0917 18:20:55.072617   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 44/120
	I0917 18:20:56.074574   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 45/120
	I0917 18:20:57.075900   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 46/120
	I0917 18:20:58.078233   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 47/120
	I0917 18:20:59.079938   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 48/120
	I0917 18:21:00.081368   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 49/120
	I0917 18:21:01.083429   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 50/120
	I0917 18:21:02.084888   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 51/120
	I0917 18:21:03.086238   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 52/120
	I0917 18:21:04.087661   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 53/120
	I0917 18:21:05.089035   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 54/120
	I0917 18:21:06.091112   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 55/120
	I0917 18:21:07.092479   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 56/120
	I0917 18:21:08.093858   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 57/120
	I0917 18:21:09.095376   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 58/120
	I0917 18:21:10.096826   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 59/120
	I0917 18:21:11.099391   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 60/120
	I0917 18:21:12.101026   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 61/120
	I0917 18:21:13.102304   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 62/120
	I0917 18:21:14.103967   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 63/120
	I0917 18:21:15.105371   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 64/120
	I0917 18:21:16.107481   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 65/120
	I0917 18:21:17.109003   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 66/120
	I0917 18:21:18.110377   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 67/120
	I0917 18:21:19.111878   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 68/120
	I0917 18:21:20.113303   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 69/120
	I0917 18:21:21.114764   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 70/120
	I0917 18:21:22.116502   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 71/120
	I0917 18:21:23.117837   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 72/120
	I0917 18:21:24.119162   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 73/120
	I0917 18:21:25.120683   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 74/120
	I0917 18:21:26.123086   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 75/120
	I0917 18:21:27.124532   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 76/120
	I0917 18:21:28.126034   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 77/120
	I0917 18:21:29.127715   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 78/120
	I0917 18:21:30.129300   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 79/120
	I0917 18:21:31.131921   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 80/120
	I0917 18:21:32.133512   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 81/120
	I0917 18:21:33.135189   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 82/120
	I0917 18:21:34.136514   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 83/120
	I0917 18:21:35.138072   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 84/120
	I0917 18:21:36.140016   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 85/120
	I0917 18:21:37.141550   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 86/120
	I0917 18:21:38.143025   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 87/120
	I0917 18:21:39.144562   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 88/120
	I0917 18:21:40.146022   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 89/120
	I0917 18:21:41.148270   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 90/120
	I0917 18:21:42.149748   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 91/120
	I0917 18:21:43.152208   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 92/120
	I0917 18:21:44.153719   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 93/120
	I0917 18:21:45.155303   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 94/120
	I0917 18:21:46.157882   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 95/120
	I0917 18:21:47.159239   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 96/120
	I0917 18:21:48.160807   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 97/120
	I0917 18:21:49.162506   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 98/120
	I0917 18:21:50.164168   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 99/120
	I0917 18:21:51.166539   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 100/120
	I0917 18:21:52.168075   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 101/120
	I0917 18:21:53.169539   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 102/120
	I0917 18:21:54.171807   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 103/120
	I0917 18:21:55.173288   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 104/120
	I0917 18:21:56.175265   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 105/120
	I0917 18:21:57.176745   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 106/120
	I0917 18:21:58.178231   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 107/120
	I0917 18:21:59.180057   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 108/120
	I0917 18:22:00.181479   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 109/120
	I0917 18:22:01.182970   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 110/120
	I0917 18:22:02.184414   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 111/120
	I0917 18:22:03.185809   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 112/120
	I0917 18:22:04.187636   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 113/120
	I0917 18:22:05.188998   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 114/120
	I0917 18:22:06.191031   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 115/120
	I0917 18:22:07.192479   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 116/120
	I0917 18:22:08.193912   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 117/120
	I0917 18:22:09.195365   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 118/120
	I0917 18:22:10.196888   76310 main.go:141] libmachine: (no-preload-328741) Waiting for machine to stop 119/120
	I0917 18:22:11.197587   76310 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 18:22:11.197646   76310 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0917 18:22:11.200137   76310 out.go:201] 
	W0917 18:22:11.201624   76310 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0917 18:22:11.201638   76310 out.go:270] * 
	* 
	W0917 18:22:11.204862   76310 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:22:11.206442   76310 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-328741 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
E0917 18:22:12.459331   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741: exit status 3 (18.534089637s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:29.741544   76973 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0917 18:22:29.741565   76973 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-328741" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.06s)
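As in the embed-certs case, the post-mortem status probe fails with exit status 3 because no SSH session can be opened to the guest while the domain is stuck mid-shutdown; the underlying error is a plain TCP dial failure to port 22. A minimal sketch under that assumption, reusing the guest IP from the log above (this is not the minikube status implementation):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.72.182:22 is the no-preload guest address from the log above;
		// substitute the address of the machine being probed.
		conn, err := net.DialTimeout("tcp", "192.168.72.182:22", 5*time.Second)
		if err != nil {
			// With the guest unreachable this prints something like
			// "dial tcp 192.168.72.182:22: connect: no route to host",
			// which is why status reports state "Error" (exit status 3).
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable; an SSH session could be attempted")
	}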

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-438836 --alsologtostderr -v=3
E0917 18:21:10.836082   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:18.144791   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:21.889688   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:21.896106   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:21.907824   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:21.929339   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:21.970892   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:22.052376   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:22.214073   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:22.536259   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:23.178264   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:24.460119   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:24.983371   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:27.021444   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:32.142868   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:42.384521   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.544657   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.551063   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.562531   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.583961   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.625424   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.707508   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:44.869153   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:45.190659   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:45.832274   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:47.113992   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:21:49.676089   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-438836 --alsologtostderr -v=3: exit status 82 (2m0.502932197s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-438836"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:21:06.979394   76688 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:21:06.979692   76688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:21:06.979706   76688 out.go:358] Setting ErrFile to fd 2...
	I0917 18:21:06.979713   76688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:21:06.980188   76688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:21:06.980751   76688 out.go:352] Setting JSON to false
	I0917 18:21:06.980975   76688 mustload.go:65] Loading cluster: default-k8s-diff-port-438836
	I0917 18:21:06.981366   76688 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:21:06.981432   76688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:21:06.981616   76688 mustload.go:65] Loading cluster: default-k8s-diff-port-438836
	I0917 18:21:06.981715   76688 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:21:06.981738   76688 stop.go:39] StopHost: default-k8s-diff-port-438836
	I0917 18:21:06.982095   76688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:21:06.982136   76688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:21:06.998267   76688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0917 18:21:06.998770   76688 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:21:06.999390   76688 main.go:141] libmachine: Using API Version  1
	I0917 18:21:06.999432   76688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:21:06.999855   76688 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:21:07.002207   76688 out.go:177] * Stopping node "default-k8s-diff-port-438836"  ...
	I0917 18:21:07.003510   76688 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 18:21:07.003540   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:21:07.003786   76688 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 18:21:07.003813   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:21:07.006676   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:21:07.007086   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:19:44 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:21:07.007124   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:21:07.007227   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:21:07.007418   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:21:07.007558   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:21:07.007696   76688 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:21:07.107311   76688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0917 18:21:07.169639   76688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0917 18:21:07.231160   76688 main.go:141] libmachine: Stopping "default-k8s-diff-port-438836"...
	I0917 18:21:07.231215   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:21:07.232849   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Stop
	I0917 18:21:07.236654   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 0/120
	I0917 18:21:08.238404   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 1/120
	I0917 18:21:09.240098   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 2/120
	I0917 18:21:10.241701   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 3/120
	I0917 18:21:11.243135   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 4/120
	I0917 18:21:12.245191   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 5/120
	I0917 18:21:13.246562   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 6/120
	I0917 18:21:14.247866   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 7/120
	I0917 18:21:15.249133   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 8/120
	I0917 18:21:16.250366   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 9/120
	I0917 18:21:17.251887   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 10/120
	I0917 18:21:18.253160   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 11/120
	I0917 18:21:19.254645   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 12/120
	I0917 18:21:20.256112   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 13/120
	I0917 18:21:21.257575   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 14/120
	I0917 18:21:22.259651   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 15/120
	I0917 18:21:23.261100   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 16/120
	I0917 18:21:24.262568   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 17/120
	I0917 18:21:25.263902   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 18/120
	I0917 18:21:26.265481   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 19/120
	I0917 18:21:27.268161   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 20/120
	I0917 18:21:28.269760   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 21/120
	I0917 18:21:29.271534   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 22/120
	I0917 18:21:30.273438   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 23/120
	I0917 18:21:31.274999   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 24/120
	I0917 18:21:32.277088   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 25/120
	I0917 18:21:33.278468   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 26/120
	I0917 18:21:34.280015   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 27/120
	I0917 18:21:35.281431   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 28/120
	I0917 18:21:36.282864   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 29/120
	I0917 18:21:37.285327   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 30/120
	I0917 18:21:38.286846   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 31/120
	I0917 18:21:39.288329   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 32/120
	I0917 18:21:40.289782   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 33/120
	I0917 18:21:41.291406   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 34/120
	I0917 18:21:42.293096   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 35/120
	I0917 18:21:43.294524   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 36/120
	I0917 18:21:44.296049   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 37/120
	I0917 18:21:45.297704   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 38/120
	I0917 18:21:46.299191   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 39/120
	I0917 18:21:47.301406   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 40/120
	I0917 18:21:48.302811   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 41/120
	I0917 18:21:49.304195   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 42/120
	I0917 18:21:50.305816   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 43/120
	I0917 18:21:51.307315   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 44/120
	I0917 18:21:52.309554   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 45/120
	I0917 18:21:53.310980   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 46/120
	I0917 18:21:54.312493   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 47/120
	I0917 18:21:55.314011   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 48/120
	I0917 18:21:56.315586   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 49/120
	I0917 18:21:57.317986   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 50/120
	I0917 18:21:58.319434   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 51/120
	I0917 18:21:59.320944   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 52/120
	I0917 18:22:00.322358   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 53/120
	I0917 18:22:01.323804   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 54/120
	I0917 18:22:02.325940   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 55/120
	I0917 18:22:03.327433   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 56/120
	I0917 18:22:04.328888   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 57/120
	I0917 18:22:05.330411   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 58/120
	I0917 18:22:06.331725   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 59/120
	I0917 18:22:07.334115   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 60/120
	I0917 18:22:08.335527   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 61/120
	I0917 18:22:09.337065   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 62/120
	I0917 18:22:10.338464   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 63/120
	I0917 18:22:11.339758   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 64/120
	I0917 18:22:12.341913   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 65/120
	I0917 18:22:13.343344   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 66/120
	I0917 18:22:14.344680   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 67/120
	I0917 18:22:15.346071   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 68/120
	I0917 18:22:16.347370   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 69/120
	I0917 18:22:17.348782   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 70/120
	I0917 18:22:18.350333   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 71/120
	I0917 18:22:19.351877   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 72/120
	I0917 18:22:20.353573   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 73/120
	I0917 18:22:21.355141   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 74/120
	I0917 18:22:22.357305   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 75/120
	I0917 18:22:23.358915   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 76/120
	I0917 18:22:24.360272   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 77/120
	I0917 18:22:25.361744   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 78/120
	I0917 18:22:26.363502   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 79/120
	I0917 18:22:27.365829   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 80/120
	I0917 18:22:28.367306   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 81/120
	I0917 18:22:29.368735   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 82/120
	I0917 18:22:30.370288   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 83/120
	I0917 18:22:31.371561   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 84/120
	I0917 18:22:32.373759   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 85/120
	I0917 18:22:33.375149   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 86/120
	I0917 18:22:34.376575   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 87/120
	I0917 18:22:35.378131   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 88/120
	I0917 18:22:36.379418   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 89/120
	I0917 18:22:37.381747   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 90/120
	I0917 18:22:38.383123   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 91/120
	I0917 18:22:39.384413   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 92/120
	I0917 18:22:40.385911   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 93/120
	I0917 18:22:41.387571   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 94/120
	I0917 18:22:42.389725   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 95/120
	I0917 18:22:43.391161   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 96/120
	I0917 18:22:44.392627   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 97/120
	I0917 18:22:45.393937   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 98/120
	I0917 18:22:46.395479   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 99/120
	I0917 18:22:47.397941   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 100/120
	I0917 18:22:48.399419   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 101/120
	I0917 18:22:49.401337   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 102/120
	I0917 18:22:50.402538   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 103/120
	I0917 18:22:51.404021   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 104/120
	I0917 18:22:52.406087   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 105/120
	I0917 18:22:53.407671   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 106/120
	I0917 18:22:54.408882   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 107/120
	I0917 18:22:55.410479   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 108/120
	I0917 18:22:56.412367   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 109/120
	I0917 18:22:57.415055   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 110/120
	I0917 18:22:58.416677   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 111/120
	I0917 18:22:59.418306   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 112/120
	I0917 18:23:00.419841   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 113/120
	I0917 18:23:01.421577   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 114/120
	I0917 18:23:02.423754   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 115/120
	I0917 18:23:03.425384   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 116/120
	I0917 18:23:04.426981   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 117/120
	I0917 18:23:05.428667   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 118/120
	I0917 18:23:06.430257   76688 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for machine to stop 119/120
	I0917 18:23:07.431292   76688 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0917 18:23:07.431386   76688 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0917 18:23:07.433580   76688 out.go:201] 
	W0917 18:23:07.434828   76688 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0917 18:23:07.434842   76688 out.go:270] * 
	* 
	W0917 18:23:07.437942   76688 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:23:07.439415   76688 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-438836 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
E0917 18:23:10.821166   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:10.827580   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:10.839024   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:10.860469   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:10.901881   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:10.983650   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:11.145245   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:11.466936   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:12.109031   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:13.390980   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:15.952384   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:21.073681   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:24.145605   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836: exit status 3 (18.619874257s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:23:26.061573   77569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0917 18:23:26.061596   77569 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-438836" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.12s)
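The repeated "Waiting for machine to stop N/120" lines above come from a once-per-second poll that gives up after 120 attempts and then exits with GUEST_STOP_TIMEOUT because the kvm2 domain still reports "Running". A minimal Go sketch of that kind of bounded stop poll follows; the names getState, waitForStop and the state type are illustrative stand-ins, not minikube's or libmachine's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

type state int

const (
	running state = iota
	stopped
)

// getState is a stand-in for querying the hypervisor (libvirt in the kvm2 driver);
// it is hardcoded here so the sketch stays self-contained.
func getState() state { return running }

// waitForStop polls once per second, up to maxAttempts times, and gives up with
// an error like the "unable to stop vm" message captured in the log above.
func waitForStop(maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() == stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

Exhausting the 120-attempt budget is what the log records as exit status 82 with the GUEST_STOP_TIMEOUT advice box.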

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863: exit status 3 (3.167773176s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:16.525635   77020 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host
	E0917 18:22:16.525672   77020 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-081863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-081863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152270238s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-081863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
E0917 18:22:22.700796   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863: exit status 3 (3.063484131s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:25.741685   77086 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host
	E0917 18:22:25.741707   77086 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.61:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-081863" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
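Both the status probe and the addon enable in this test fail on the same underlying condition: TCP port 22 on 192.168.50.61 is unreachable ("no route to host"), so no SSH session to the VM can be established. A small self-contained Go check that reproduces the same condition outside of minikube is sketched below; the address is the one reported in the log, and only the standard library is assumed.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same reachability check that the failing commands rely on: can we open
	// a TCP connection to the guest's SSH port within a few seconds?
	conn, err := net.DialTimeout("tcp", "192.168.50.61:22", 3*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}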

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-190698 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-190698 create -f testdata/busybox.yaml: exit status 1 (43.863547ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-190698" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-190698 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 6 (221.415761ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:24.526049   77175 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-190698" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 6 (217.827433ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:24.744775   77205 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-190698" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
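The kubectl calls above never reach a cluster: the "old-k8s-version-190698" context is simply absent from /home/jenkins/minikube-integration/19662-11085/kubeconfig, which is also what the status command's "does not appear in ... kubeconfig" warning says. A minimal sketch of that check using client-go's kubeconfig loader follows; it assumes k8s.io/client-go is available, and the path and context name are taken from the log.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the test harness points kubectl at.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19662-11085/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	name := "old-k8s-version-190698"
	// kubectl's `context "..." does not exist` error reduces to this lookup failing.
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist in kubeconfig\n", name)
		return
	}
	fmt.Printf("context %q found\n", name)
}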

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (80.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-190698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0917 18:22:25.521168   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-190698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m19.840084775s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-190698 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-190698 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-190698 describe deploy/metrics-server -n kube-system: exit status 1 (45.865107ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-190698" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-190698 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 6 (220.077886ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:23:44.850261   77881 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-190698" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (80.11s)
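The addon callbacks here run kubectl on the node itself and fail with "The connection to the server localhost:8443 was refused", i.e. kube-apiserver was not listening when the metrics-server manifests were applied. A rough Go sketch of an apiserver reachability probe is below; the endpoint is assumed from the log, and certificate verification is skipped because only connectivity matters for this check.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Probe the apiserver's health endpoint on the port reported in the log.
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}

Any HTTP status in the response (even 401 or 403) would show the apiserver is listening; only a connection error matches the "connection refused" failure above.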

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
E0917 18:22:32.758437   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741: exit status 3 (3.167705375s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:32.909567   77322 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0917 18:22:32.909587   77322 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-328741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-328741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.158313967s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-328741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
E0917 18:22:40.066622   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741: exit status 3 (3.057529647s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:22:42.125684   77403 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0917 18:22:42.125702   77403 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-328741" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836: exit status 3 (3.16788058s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:23:29.229652   77693 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0917 18:23:29.229671   77693 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-438836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0917 18:23:31.315038   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-438836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153727482s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-438836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836: exit status 3 (3.062229975s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:23:38.445709   77773 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0917 18:23:38.445731   77773 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-438836" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (739.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0917 18:23:51.797220   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.729335   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.735752   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.747209   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.768627   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.810103   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:58.891608   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:59.053368   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:59.375159   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:00.016523   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:01.297879   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:03.859294   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:05.750039   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:08.981153   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:19.222481   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:28.405064   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:32.758568   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:39.704104   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:46.067542   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:48.896599   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:24:56.205839   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:25:16.599736   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:25:20.666125   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:25:23.908797   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:25:54.680476   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:26:21.889761   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:26:24.983434   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:26:42.587605   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:26:44.545129   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:26:49.591995   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:27:02.207013   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:27:12.246760   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:27:29.909566   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:27:48.053466   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:28:10.821354   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:28:38.522775   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:28:50.531919   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:28:58.728151   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:29:26.429991   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:29:48.896887   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:29:56.205839   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:31:21.889762   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:31:24.983559   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:31:44.544660   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:32:02.206872   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m16.237925521s)

                                                
                                                
-- stdout --
	* [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
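	The three sed invocations above pin the pause image, switch the cgroup driver to cgroupfs, and add a conmon cgroup. Assuming they apply cleanly, /etc/crio/crio.conf.d/02-crio.conf would end up with lines equivalent to this sketch (illustrative; the resulting file itself is not captured in this log):
		pause_image = "registry.k8s.io/pause:3.2"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"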
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
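	The failed sysctl probe above only indicates that br_netfilter was not yet loaded; the subsequent modprobe and ip_forward write set up the bridge-netfilter and IP-forwarding prerequisites that the bridge CNI chosen earlier relies on. A minimal manual check of the same state, assuming shell access to the VM, would be:
		sudo modprobe br_netfilter                    # load the bridge netfilter module
		sysctl net.bridge.bridge-nf-call-iptables     # typically reports 1 once the module is loaded
		cat /proc/sys/net/ipv4/ip_forward             # 1 after the echo above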
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
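	If the grep above finds no host.minikube.internal entry, the bash one-liner rewrites /etc/hosts to add one pointing back at the host on the VM network; the added entry would look like this sketch (illustrative, not captured from the VM):
		192.168.61.1	host.minikube.internal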
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
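Every "describe nodes" attempt fails with the same "connection to the server localhost:8443 was refused" message: /var/lib/minikube/kubeconfig evidently points kubectl at the apiserver on localhost:8443, and since crictl never finds a kube-apiserver container, nothing is listening on that port. A quick way to confirm this from inside the node (illustrative only, not part of the test run; the port comes from the error text above):

	# both checks should report failure while no apiserver container exists
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"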
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
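The cycle above (and the near-identical cycles that follow it) is minikube probing for each control-plane container by name with `sudo crictl ps -a --quiet --name=<component>` and, when nothing is found, falling back to gathering kubelet, dmesg and CRI-O logs; `kubectl describe nodes` keeps failing because nothing is listening on localhost:8443 yet. A minimal Go sketch of that probe, assuming a node with crictl and sudo available — the helper and its output format are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContainer reports whether `crictl ps -a --quiet --name=<name>` prints at
// least one container ID, i.e. the component exists on this node in any state.
func hasContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ok, err := hasContainer(c)
		if err != nil {
			fmt.Printf("probe %-25s error: %v\n", c, err)
			continue
		}
		// In the failing run captured in this log, every component reports found=false,
		// which is why the probe loop never exits early.
		fmt.Printf("probe %-25s found=%v\n", c, ok)
	}
}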
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
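At this point the probe loop has run for roughly four minutes without ever seeing a kube-apiserver container, so the restart is abandoned and the control plane is wiped with `kubeadm reset --force` before being re-initialised. A hedged sketch of that retry-then-reset decision — the deadline, poll interval and helper below are illustrative; only the commands themselves are taken from the log:

package main

import (
	"os"
	"os/exec"
	"strings"
	"time"
)

// apiserverPresent uses the same crictl query the log shows to check whether
// any kube-apiserver container exists on the node.
func apiserverPresent() bool {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the run above gave up after ~4m01s
	for time.Now().Before(deadline) {
		if apiserverPresent() {
			return // control plane came back; no reset needed
		}
		time.Sleep(3 * time.Second) // the log polls every few seconds
	}

	// Never saw an apiserver container: wipe the control plane so it can be
	// re-initialised, mirroring the reset command recorded in the log.
	reset := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"),
		"kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force")
	reset.Stdout, reset.Stderr = os.Stdout, os.Stderr
	_ = reset.Run()
}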
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
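	The grep/rm cycle above is minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is run again. A minimal shell sketch of the same cycle, assuming direct shell access to the node (for example via `minikube ssh`); the loop form and file list are illustrative, not minikube's actual implementation:

	    # Illustrative only: mirrors the per-file grep/rm steps logged above.
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	            sudo rm -f "/etc/kubernetes/$f"    # drop configs that do not point at the expected endpoint
	        fi
	    done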
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
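	The wait-control-plane timeout above is driven by kubeadm's kubelet health probe against http://localhost:10248/healthz, which keeps returning "connection refused" because the kubelet never comes up. A minimal sketch of rerunning the same checks by hand on the node (assuming shell access, e.g. `minikube ssh`), following the troubleshooting hints printed in the output:

	    # Illustrative only: the probe kubeadm's [kubelet-check] performs, plus the suggested follow-ups.
	    curl -sSL http://localhost:10248/healthz               # matches the failing healthz call in the log
	    sudo systemctl status kubelet --no-pager               # confirm whether the kubelet unit is running
	    sudo journalctl -xeu kubelet --no-pager | tail -n 200  # recent kubelet errors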
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
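	At this point every crictl query above has come back empty, so no control-plane container was ever created; the failure sits upstream of CRI-O, in the kubelet itself. A minimal sketch of the equivalent manual inspection, using the same crictl invocations the kubeadm output recommends (CONTAINERID is a placeholder for an ID found by the first command):

	    # Illustrative only: list Kubernetes containers known to CRI-O, then read logs from a failing one.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID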
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
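	If this failure needs to be reported upstream, the advice box points at the standard flow; a minimal sketch of the log-collection step it asks for:

	    # Illustrative only: capture a full log bundle to attach to a minikube GitHub issue.
	    minikube logs --file=logs.txt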
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
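The kubeadm output above already names the commands needed to dig further into this failure. A minimal triage sketch, assuming the profile name old-k8s-version-190698 from this run and that the commands are executed inside the VM (for example via 'minikube ssh -p old-k8s-version-190698'):

	# Check whether the kubelet is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List the control-plane containers known to CRI-O and inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The suggestion in the log points at a possible cgroup-driver mismatch; a retry of the
	# start with the suggested extra kubelet config (flags taken from the failing invocation):
	minikube start -p old-k8s-version-190698 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
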
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (241.055312ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25: (1.713477757s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
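	(The retry loop above is the kvm2 driver polling libvirt until the domain's DHCP lease appears, then probing SSH. A minimal manual equivalent, assuming virsh access to qemu:///system and reusing the network name, MAC address and key path printed in the log; this is an illustrative sketch, not minikube's code.)
	    # Poll the libvirt network for the domain's lease, then probe SSH (requires privileges for qemu:///system).
	    NET=mk-default-k8s-diff-port-438836
	    MAC=52:54:00:78:fb:fd
	    KEY=/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa
	    until IP=$(virsh -c qemu:///system net-dhcp-leases "$NET" | awk -v mac="$MAC" '$0 ~ mac {print $5}' | cut -d/ -f1) && [ -n "$IP" ]; do
	      sleep 1   # the driver uses a growing back-off (515ms, 629ms, ...); a fixed sleep is enough by hand
	    done
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i "$KEY" docker@"$IP" 'exit 0' && echo "SSH up on $IP"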
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
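	(The commands in the block above rewrite /etc/crio/crio.conf.d/02-crio.conf for the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, load br_netfilter because the bridge-nf-call-iptables sysctl was missing, and restart CRI-O. Condensed into one place for readability; these are the same commands the log shows, not an additional step.)
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter                              # the sysctl probe failed, so the module is loaded explicitly
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo /usr/bin/crictl version                            # confirms RuntimeName cri-o / RuntimeVersion 1.29.1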
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
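	(The image-load sequence above streams each cached tarball from /var/lib/minikube/images into the node's container storage with podman, removing any stale copy with crictl first. A single-image sketch of the same pattern, using the storage-provisioner paths from the log; the verification step at the end is an illustrative addition.)
	    sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5 || true   # drop any stale copy, ignore if absent
	    sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5           # load the cached tarball into shared storage
	    sudo /usr/bin/crictl images | grep storage-provisioner                        # CRI-O should now list the image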
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
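	(The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new. As an optional aside, not a step minikube performs here, the file can be sanity-checked with the staged kubeadm binary without touching the cluster; --config and --dry-run are standard kubeadm init flags.)
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run   # renders manifests to a temp dir only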
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
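The hash-and-symlink sequence above is how the CA bundles are registered with the system trust store: "openssl x509 -hash -noout" prints the subject hash (b5213941 for minikubeCA.pem here), and the certificate is then linked as /etc/ssl/certs/<hash>.0. A minimal Go equivalent of that step, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash recreates the pattern in the log: ask openssl for the
// certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 to the
// installed PEM so the system trust store can find it.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace an existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}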
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
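"-checkend 86400" asks whether a certificate expires within the next 86400 seconds (24 hours); only when one of these checks fails do the control-plane certs get regenerated. The same test expressed directly with Go's crypto/x509, as an illustration rather than the code path minikube actually uses:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}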
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
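The grep / rm -f sequence above prunes kubeconfigs that no longer point at the expected control-plane endpoint so the following kubeadm phase can regenerate them. A hypothetical Go version of that cleanup:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any /etc/kubernetes/*.conf that does not
// reference the expected control-plane endpoint, so a subsequent
// "kubeadm init phase kubeconfig all" recreates it. Same idea as the
// grep / rm -f sequence in the log.
func cleanStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // still points at the right endpoint, keep it
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			return fmt.Errorf("remove %s: %w", f, err)
		}
	}
	return nil
}

func main() {
	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}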
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
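The restart then replays the kubeadm init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the freshly copied kubeadm.yaml. A simplified stand-in for that sequence (the real runner executes these over SSH with the pinned binaries directory on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restartPhases mirrors the phase order in the log: regenerate certs and
// kubeconfigs, restart the kubelet, then re-render the static pod manifests
// for the control plane and local etcd.
var restartPhases = [][]string{
	{"init", "phase", "certs", "all"},
	{"init", "phase", "kubeconfig", "all"},
	{"init", "phase", "kubelet-start"},
	{"init", "phase", "control-plane", "all"},
	{"init", "phase", "etcd", "local"},
}

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml" // path used in the log
	for _, phase := range restartPhases {
		args := append(phase, "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}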
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
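The /etc/hosts update above is a filter-and-append: any stale host.minikube.internal line is dropped, the current gateway IP is appended, and the result is copied back over the file. A hypothetical Go equivalent of the same idea:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given hostname from the
// hosts file and appends a fresh "ip<TAB>hostname" record, the same
// filter-and-append the shell pipeline in the log performs.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}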
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
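While this is going on, the old-k8s-version VM is still waiting for its DHCP lease; each failed lookup is retried after a slightly longer, jittered delay. A generic sketch of that retry shape (a standalone illustration, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with a jittered, growing delay, the same shape
// as the "will retry after ..." entries while the VM waits for a DHCP lease.
// lookup is a placeholder; the real code queries libvirt for the lease.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Base delay grows with the attempt number, plus up to 50% jitter.
		base := time.Duration(200*(i+1)) * time.Millisecond
		jitter := time.Duration(rand.Int63n(int64(base / 2)))
		time.Sleep(base + jitter)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no lease yet") // stand-in for the libvirt query
	}, 5)
	fmt.Println(ip, err)
}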
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
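The healthz progression above is typical of an apiserver restart: 403 while anonymous access is still rejected, 500 while post-start hooks such as rbac/bootstrap-roles are pending, then 200 once every hook reports ok. A standalone polling loop in the same spirit (endpoint and timeout are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. As in the log, early responses may be 403 (anonymous
// access rejected before RBAC bootstrap) or 500 (post-start hooks pending).
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe is anonymous and the apiserver cert is not trusted by
		// this client, so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // same half-second cadence as the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.182:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}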
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
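When "crictl images" comes back empty, the preloaded image tarball is copied over and unpacked into /var with its xattrs preserved, after which all images are reported as preloaded. A minimal Go wrapper around the same tar invocation shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// the security.capability xattrs, matching the tar invocation in the log.
// The tarball path is whatever was scp'd to the node beforehand.
func extractPreload(tarball string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Once extraction finishes, "crictl images" should list the preloaded
	// images and the cached-image load step can be skipped, as in the log.
}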
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
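The cleanup above checks each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the ones that do not reference it (here they were all missing, so every grep fails) before staging the new kubeadm.yaml. A minimal shell sketch of that check-and-remove loop, using the endpoint and paths shown in the log:

    # Drop kubeconfigs that no longer point at the expected control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done
    # Stage the freshly rendered kubeadm config for the init phases that follow.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml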
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
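The pod_ready lines poll each system-critical pod (here coredns) until its Ready condition becomes True, capped at 4 minutes. The same wait can be expressed directly with kubectl; a sketch, assuming the profile name doubles as the kubeconfig context:

    # Wait for the kube-dns (coredns) pods in kube-system to report Ready, as the log does.
    kubectl --context no-preload-328741 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m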
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
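Across the lines above, minikube drives the individual kubeadm init phases over SSH rather than running a single kubeadm init. A consolidated sketch of that sequence with the paths from the log (the addon phase appears later in the log, once the apiserver reports healthy):

    # Re-run the kubeadm init phases one at a time against the staged config.
    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.31.1
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"
    # Later, after the apiserver health check succeeds:
    # sudo env PATH="$BIN:$PATH" kubeadm init phase addon all --config "$CFG"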
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
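The healthz checks above simply retry https://192.168.39.58:8444/healthz until it answers 200 "ok"; the intermediate 403 and 500 responses appear while RBAC bootstrap and the other post-start hooks flagged in the output (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still completing. A rough shell equivalent of that polling loop, with curl -k standing in for whatever client credentials minikube actually uses:

    # Poll the apiserver health endpoint until it reports "ok" or we give up.
    URL="https://192.168.39.58:8444/healthz"
    for i in $(seq 1 60); do
        if [ "$(curl -sk --max-time 2 "$URL" || true)" = "ok" ]; then
            echo "apiserver healthy after $i attempt(s)"
            exit 0
        fi
        sleep 1
    done
    echo "apiserver did not become healthy" >&2
    exit 1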
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
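configureAuth regenerates a server certificate for the guest and then copies the CA plus the new server cert/key into /etc/docker on the machine. A hand-rolled equivalent using plain scp/ssh with the key and address from the log (staging through /tmp, since the docker user cannot write /etc/docker directly):

    KEY=/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa
    MK=/home/jenkins/minikube-integration/19662-11085/.minikube
    HOST=docker@192.168.61.143
    scp -i "$KEY" "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" "$HOST":/tmp/
    ssh -i "$KEY" "$HOST" 'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'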
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
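The burst of identical pgrep invocations above is minikube (process 78008, the old-k8s-version start) polling for a kube-apiserver process while kubeadm brings the control plane up; the command repeats roughly every 500ms until it returns a PID. A minimal stand-alone sketch of that wait loop, assuming a 4-minute timeout (the timeout value is illustrative and not taken from the log):

	# Poll for a kube-apiserver started by minikube, mirroring the pgrep pattern above.
	deadline=$((SECONDS + 240))                     # illustrative timeout, not from the log
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for kube-apiserver" >&2
	    exit 1
	  fi
	  sleep 0.5                                     # matches the ~500ms spacing of the log lines
	done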
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
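The WaitForSSH step above probes the freshly booted guest by running a no-op command (exit 0) through an external ssh client with host-key checking disabled, retrying until it succeeds. A hedged equivalent, using the host, user and key path printed in the log (the retry loop itself is illustrative):

	# Keep probing until the guest accepts key-based SSH and returns exit status 0.
	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o ConnectTimeout=10 \
	      -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa \
	      docker@192.168.50.61 exit 0; do
	  sleep 2
	done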
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
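The sysconfig step a few lines above writes CRIO_MINIKUBE_OPTIONS with --insecure-registry 10.96.0.0/12, i.e. the cluster's service CIDR, so that images served from in-cluster services (such as the registry addon) can be pulled over plain HTTP, then restarts CRI-O to pick the option up. A condensed replay of that provisioning step (values taken from the log; the trailing cat is only an illustrative check):

	# Drop the CRI-O sysconfig file and restart the runtime so the flag takes effect.
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
	cat /etc/sysconfig/crio.minikube   # verify the option landed (illustrative only)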
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
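The clock check above runs date +%s.%N inside the guest, compares it with the host-side timestamp recorded for the same moment, and only resyncs the guest clock when the delta exceeds a tolerance; here the 95ms delta passes. The exact tolerance is not printed in the log, so the sketch below only measures the drift and leaves the decision to the reader:

	# Measure guest/host clock drift the same way the fix.go lines above do (measurement only).
	guest=$(ssh docker@192.168.50.61 date +%s.%N)
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %.3fs\n", g - h }'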
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
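The run of sed and grep commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches the cgroup manager to cgroupfs, recreates conmon_cgroup = "pod", seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0, loads br_netfilter (since the sysctl probe failed), enables IPv4 forwarding, and finally restarts CRI-O. A condensed, commented replay of those edits (paths and values copied from the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image and switch the cgroup manager to cgroupfs.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# Recreate conmon_cgroup = "pod" directly under the cgroup_manager line.
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Make sure a default_sysctls list exists, then allow pods to bind low ports.
	sudo grep -q "^ *default_sysctls" "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# Kernel prerequisites and runtime restart, as in the log.
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio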
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
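The pair of commands above is how minikube pins host.minikube.internal to the network gateway: a grep first checks whether /etc/hosts already has the mapping, and if not the file is rebuilt through a temp file so the edit lands as a single atomic copy. A commented version of that rewrite, with the IP and hostname taken from the log (printf replaces the log's echo purely to make the embedded tab visible):

	# Rebuild /etc/hosts without any stale host.minikube.internal entry, append the
	# current mapping, then copy the result back in one sudo step.
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.50.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts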
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
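Because crictl images found none of the expected v1.31.1 images, the preloaded image tarball (~388 MB, preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4) is copied into the guest as /preloaded.tar.lz4 and unpacked into /var with security xattrs preserved, which drops the cached images straight into CRI-O's storage. The guest-side portion, mirrored from the log:

	# Check for an existing preload, then unpack it into /var keeping file capabilities.
	stat -c "%s %y" /preloaded.tar.lz4 || echo "no preload tarball on the guest yet"
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4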
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
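The three scp-from-memory steps above render the kubelet systemd drop-in (10-kubeadm.conf, 317 bytes), the kubelet.service unit (352 bytes) and the kubeadm config (kubeadm.yaml.new, 2159 bytes) directly onto the guest. A quick, purely illustrative way to confirm what landed where; this check is not part of minikube's own flow:

	# Inspect the freshly written files (paths taken from the log).
	ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	      /lib/systemd/system/kubelet.service \
	      /var/tmp/minikube/kubeadm.yaml.new
	head -n 20 /var/tmp/minikube/kubeadm.yaml.new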
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
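
Note on the step above: each CA is installed into the guest trust store by computing the OpenSSL subject hash ("openssl x509 -hash -noout") and symlinking the PEM as "<hash>.0" under /etc/ssl/certs. The sketch below is not minikube's code, only a minimal local reproduction of that pair of commands; paths are copied from the log.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the "openssl x509 -hash" + "ln -fs" pair seen in the log:
// hash the certificate subject, then expose the PEM as <hash>.0 in the trust dir.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already present, matching the "test -L ... ||" guard
	}
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
```
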
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
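
The "-checkend 86400" calls above assert that each control-plane certificate remains valid for at least another 24 hours. The same check can be expressed directly in Go; a sketch assuming the file holds a single PEM block, with the path taken from the log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path is still valid d from now,
// the equivalent of "openssl x509 -noout -checkend <seconds>".
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```
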
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
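
The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here the files are simply absent, so every grep fails and the rm calls are no-ops). A hedged Go sketch of that cleanup rule, not the harness's own implementation:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes conf files that do not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log. A missing
// file is treated the same as a stale one.
func cleanStaleConfig(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint, keep it
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "removing %s: %v\n", p, err)
		}
	}
}

func main() {
	cleanStaleConfig("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```
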
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
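
Rather than a full "kubeadm init", the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the generated /var/tmp/minikube/kubeadm.yaml. The sketch below runs the same command sequence; the binary path and version are copied from the log, and this is an illustration, not minikube's code.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase is re-run in order during a cluster restart.
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", phase, err, out)
			return
		}
	}
}
```
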
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
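
The wait above (and the long 78008 sequences in this section) is a plain retry loop around "pgrep -xnf kube-apiserver.*minikube.*" until a matching process exists. A minimal sketch of that loop, with the pattern copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries "pgrep -xnf <pattern>" until it exits 0, i.e. until a
// matching process is running, or the timeout expires.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}
```
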
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
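
The healthz wait above tolerates connection-refused, 403 (anonymous user before RBAC bootstrap completes) and 500 (post-start hooks still running) responses, and only stops once /healthz returns 200. A self-contained sketch of that wait; the insecure TLS setting is an assumption made only so the self-signed endpoint is reachable outside the test harness.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 or the timeout expires. Any error
// or non-200 status (403, 500 during startup) just means "try again".
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.50.61:8443/healthz", 2*time.Minute))
}
```
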
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
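
The 496-byte file copied above is the bridge CNI config that the "kvm2 driver + crio runtime" path selects. The log does not show the template itself; the sketch below writes a representative bridge + portmap chain to the same path, and every field value is illustrative rather than minikube's actual template.

```go
package main

import "os"

// Illustrative only: a representative bridge+portmap conflist written to the
// path from the log. The real template minikube generates may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	os.MkdirAll("/etc/cni/net.d", 0o755)
	os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}
```
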
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
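
From here on, this process is repeating the check announced above: it watches the metrics-server pod's Ready condition, which never flips to "True" in this run. A hedged client-go sketch of such a readiness wait; the kubeconfig path is an assumption, and the test harness builds its client differently.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady checks the PodReady condition, the same signal pod_ready.go logs
// as status "Ready":"True" / "False".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-g2ttm", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
```
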
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
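The block above is one full diagnostics pass: minikube probes for a running apiserver with pgrep, lists CRI containers for each control-plane component with crictl, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. For a manual reproduction over `minikube ssh`, the same commands (copied from the log above; the kubectl path is the one this v1.20.0 cluster uses) are roughly:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a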
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
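The interleaved pod_ready lines come from three parallel test runs (PIDs 77264, 77433, 77819), each waiting on a metrics-server pod that never reports Ready. A manual spot-check would look roughly like the lines below; the profile name is a placeholder and the k8s-app=metrics-server label selector is an assumption about how the addon labels its pods, not something taken from this log:

	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-g2ttm   # pod name from the log above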
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
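Every describe-nodes attempt in this run fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and the crictl listings above show no kube-apiserver container at all, so nothing is answering on that port. A minimal check from the node (an illustrative sketch, not part of the test itself) would be:

	sudo crictl ps -a --quiet --name=kube-apiserver        # empty, as in the log above
	sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'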
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
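	(Each cycle from process 78008 above follows the same shape: pgrep for a kube-apiserver process, then ask crictl whether any container — running or exited — exists for each expected control-plane component, and fall back to kubelet/dmesg/CRI-O/container-status logs when every probe comes back empty. A minimal sketch of that probe loop, assuming crictl is invoked locally rather than over SSH as ssh_runner does; this is an illustration, not minikube's cri.go:

	// Hypothetical, simplified version of the probe loop seen in the log: for each
	// expected component, run `sudo crictl ps -a --quiet --name=<name>` and report
	// the components with no matching container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
			} else {
				fmt.Printf("%q: %d container(s)\n", name, len(ids))
			}
		}
	}
	)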
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
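	(Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container at all. A hypothetical quick check that separates "nothing is listening on the apiserver port" from a kubeconfig or credentials problem — the port is taken from the error above, everything else is an assumption of this sketch:

	// Hypothetical diagnostic: if the apiserver port is closed, "connection refused"
	// from kubectl is expected and the useful logs are kubelet/CRI-O, not kubectl.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // matches the refusals logged above
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443; kubeconfig or auth is the next suspect")
	}
	)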
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
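	(With no control-plane containers to inspect, the only evidence gathered in each cycle is the fallback set above: the kubelet and CRI-O journals, dmesg, and raw container status. A small sketch that collects the same four sources using the exact commands shown in the log; running them locally in one process, instead of per cycle over SSH, is an assumption of this illustration:

	// Hypothetical one-shot collector for the fallback log sources seen above.
	// Commands are copied verbatim from the log lines; output order is arbitrary
	// because the sources are stored in a map.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          `sudo journalctl -u kubelet -n 400`,
			"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			"CRI-O":            `sudo journalctl -u crio -n 400`,
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("=== %s (err: %v) ===\n%s\n", name, err, out)
		}
	}
	)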
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
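
Each `describe nodes` attempt in this stretch fails identically: crictl finds no kube-apiserver container, so nothing answers on localhost:8443 and kubectl reports the connection as refused. A hypothetical one-off probe of the apiserver health endpoint is sketched below; it is not part of the test harness, and only the host and port are taken from the error text above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving certificate is not trusted here; skip verification
		// because this is only a liveness-style probe, not an authenticated call.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// This is the state the log shows: no apiserver container, connection refused.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
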
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
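
Process 77433 above stops polling once the 4m0s budget is spent: metrics-server-6867b74b74-l8n57 never reports Ready, so the harness abandons the control-plane restart and falls back to `kubeadm reset`. A hypothetical spot check of the same Ready condition is sketched below, driving plain kubectl with a jsonpath filter; the pod name is copied from the log, everything else is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The jsonpath expression extracts the status of the Ready condition for the pod.
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"get", "pod", "metrics-server-6867b74b74-l8n57",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// In this run the value stays "False" for the entire 4m window.
	fmt.Println("Ready condition:", string(out))
}
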
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
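
The sequence above is the stale-kubeconfig cleanup that precedes the re-init: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent; in this run all four files are already missing, so every grep exits with status 2 and the rm calls are no-ops. A rough equivalent of that check is sketched below, with the paths and endpoint copied from the log; this is a sketch of the observed behaviour, not minikube's actual kubeadm.go logic.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove it so kubeadm regenerates it.
			os.Remove(f)
			fmt.Println("removed (stale or missing):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
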
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
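The two runner calls above create /etc/cni/net.d and write minikube's generated 1-k8s.conflist (496 bytes), the bridge CNI configuration recommended for the kvm2 driver with the crio runtime. The generated file itself is not reproduced in the log; a generic bridge-plus-portmap conflist of roughly that shape could be written like this (the subnet and plugin options here are assumptions for illustration, not the actual contents):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [ [ { "subnet": "10.244.0.0/16" } ] ]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF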
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
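The burst of repeated 'kubectl get sa default' runs above is a poll at roughly half-second intervals: after kubeadm init and the minikube-rbac clusterrolebinding, minikube waits until the controller-manager has created the default ServiceAccount before declaring elevateKubeSystemPrivileges done (about 4.4s here). A minimal shell sketch of the same wait, reusing the binary and kubeconfig paths from the log:

	# retry until the 'default' ServiceAccount exists in the freshly initialized cluster
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done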
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
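The node_ready/pod_ready checks above appear to be performed in-process against the API server (node_ready.go / pod_ready.go) rather than through a kubectl run, but they amount to waiting on Ready conditions. A hedged kubectl rendering of the same waits, with the context, node name, and label taken from the log:

	kubectl --context no-preload-328741 wait node/no-preload-328741 \
	    --for=condition=Ready --timeout=6m
	kubectl --context no-preload-328741 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=6m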
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
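The interleaved pod_ready lines from the neighboring profile show a metrics-server pod reporting Ready=False throughout, the same symptom that times out the metrics-server wait earlier in this log (pod "metrics-server-6867b74b74-gpdsn" hit its 4m0s limit); the addon here is also configured with the image fake.domain/registry.k8s.io/echoserver:1.4, which suggests an intentionally unreachable image in this test setup. A few hedged commands that could be run against such a profile to see why the pod never becomes Ready (the context and the k8s-app=metrics-server label are assumptions based on names in the log):

	kubectl --context no-preload-328741 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-328741 -n kube-system describe deploy metrics-server
	kubectl --context no-preload-328741 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
	kubectl --context no-preload-328741 top nodes   # only returns data once the metrics APIService is serving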
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
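	(Aside: the repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists before it finishes elevating kube-system privileges. A minimal sketch of the same wait as a shell loop, reusing the binary and kubeconfig paths from the log; the retry interval is an assumption and this loop is not part of the test run.)

	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until the default ServiceAccount appears
	    done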
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
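	(Aside: the readiness gate above polls each system-critical pod for a Ready condition. A hedged, roughly equivalent check with kubectl; the label selector and timeout mirror the log, and the command itself is not something the test runs.)

	    kubectl --context no-preload-328741 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s
	    # repeat with the other component=... labels listed in the log for full coverage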
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
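	(Aside: the apply above installs the metrics-server APIService, Deployment, RBAC, and Service manifests in a single kubectl invocation. An illustrative follow-up check, assumed rather than taken from the run; the Deployment name matches the metrics-server-6867b74b74-* pods seen later in the log.)

	    kubectl --context default-k8s-diff-port-438836 -n kube-system get deploy metrics-server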
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
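	(Aside: the health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint on this profile's non-default port 8444. A hedged shell equivalent; the endpoint comes from the log, while -k (skip certificate verification) is an assumption and not what minikube itself does.)

	    curl -k https://192.168.39.58:8444/healthz
	    # expected body on success: ok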
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
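	(Aside: the "minor skew: 0" line above compares the kubectl client version with the cluster's server version. An illustrative way to see the same comparison by hand; the command is assumed and not part of the run.)

	    kubectl --context default-k8s-diff-port-438836 version --output=yaml
	    # clientVersion.gitVersion and serverVersion.gitVersion should both report v1.31.1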
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
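	(Aside: the 78008 run above is stuck at kubeadm's kubelet health probe, which is just an HTTP GET against the kubelet's local healthz port. Hedged commands one would typically run on that node to investigate; they are assumptions and not part of the test.)

	    curl -sSL http://localhost:10248/healthz
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet --no-pager | tail -n 50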
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
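	(Aside: the block above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not match; here the greps fail only because the files are already gone after the reset. A compact sketch of the same cleanup as a shell loop, with the endpoint taken from the log; the loop is an illustration, not the code minikube runs.)

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
	    done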
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
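The two deprecation warnings above come from kubeadm itself, not from the test, and kubeadm already names the remedy. A hedged sketch of that migration, reusing the config path this run passes to kubeadm later in the log (/var/tmp/minikube/kubeadm.yaml); the output path is purely illustrative and not something minikube actually writes:

    # Assumption: run on the node; --new-config path is a placeholder for illustration only.
    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-migrated.yaml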
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
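The bridge CNI config is copied over SSH straight from memory (496 bytes to /etc/cni/net.d/1-k8s.conflist), so its contents never appear in this log. To see what actually landed on the node, one could run the following, with the profile name taken from this run:

    # Inspect the generated bridge CNI config on the embed-certs node.
    minikube ssh -p embed-certs-081863 "sudo cat /etc/cni/net.d/1-k8s.conflist"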
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
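The addon verification above (addons.go:475) only confirms the metrics-server objects were applied; it does not wait for metrics to flow. A quick manual follow-up against this cluster, assuming the deployment keeps its default name (the pod listed later is metrics-server-6867b74b74-98t8z) and allowing a minute or two before `kubectl top` returns data:

    # Wait for the metrics-server rollout, then ask for node metrics.
    kubectl --context embed-certs-081863 -n kube-system rollout status deploy/metrics-server
    kubectl --context embed-certs-081863 top nodes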
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
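The two health probes in this run (kubelet on 127.0.0.1:10248, apiserver on 192.168.50.61:8443) can be reproduced by hand. A sketch, assuming the cluster keeps the default anonymous access to /healthz and that the kubelet check is run from the node itself:

    # From any machine that can reach the VM; -k skips verification against the embedded CA.
    curl -k https://192.168.50.61:8443/healthz
    # The kubelet healthz listener binds to localhost, so run it on the node.
    minikube ssh -p embed-certs-081863 "curl -sS http://127.0.0.1:10248/healthz"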
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
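With the profile finished, the kubeconfig written above now carries an embed-certs-081863 context; a minimal smoke test, using the same --context form the rest of this report uses:

    # Assumes KUBECONFIG points at the file minikube just updated.
    kubectl config current-context                      # expected: embed-certs-081863
    kubectl --context embed-certs-081863 get pods -A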
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
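The failure text repeats kubeadm's own triage steps; run against the node they look like the following. The profile name for this old-k8s-version run is not shown in this excerpt, so <profile> is a placeholder:

    # <profile> is hypothetical: substitute the actual old-k8s-version profile name.
    minikube ssh -p <profile> "sudo systemctl status kubelet"
    minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"
    minikube ssh -p <profile> "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"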
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
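The cleanup pass above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it (here all four are simply missing). Condensed into a single shell sketch of the same logic, assuming it runs as root on the node:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected control-plane endpoint.
      grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || rm -f "/etc/kubernetes/$f"
    done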
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
	
	
	==> CRI-O <==
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.810072896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598168810051668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7872a344-3f8d-4d3f-9fbe-9357a3fdb9c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.811700422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b01f505-7ffc-4379-9034-68e6ecdac050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.811891251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b01f505-7ffc-4379-9034-68e6ecdac050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.812203903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6b01f505-7ffc-4379-9034-68e6ecdac050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.855095012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0d4220d-c4b9-4672-9fb7-d68fd6b4ea07 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.855212199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0d4220d-c4b9-4672-9fb7-d68fd6b4ea07 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.857461527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96ee64fe-6881-4116-802c-e92770345d08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.857944781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598168857918604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96ee64fe-6881-4116-802c-e92770345d08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.858523071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47be62bc-f15b-456f-94af-e8b57e685b37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.858626046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47be62bc-f15b-456f-94af-e8b57e685b37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.858665868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=47be62bc-f15b-456f-94af-e8b57e685b37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.891783441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5303ccd5-96d3-44cd-94ab-bd7774f6590d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.891865064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5303ccd5-96d3-44cd-94ab-bd7774f6590d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.892824361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3966162e-2441-4762-ac8c-38ce22c060bb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.893232088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598168893203145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3966162e-2441-4762-ac8c-38ce22c060bb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.893756222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5cf8449-389c-4422-b1b7-4b3d2f92a9c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.893803939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5cf8449-389c-4422-b1b7-4b3d2f92a9c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.893834076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b5cf8449-389c-4422-b1b7-4b3d2f92a9c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.928132049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4818331-c27d-430a-be76-37f03071a78c name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.928205530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4818331-c27d-430a-be76-37f03071a78c name=/runtime.v1.RuntimeService/Version
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.929404762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4452e87-b5b9-49c2-bc0c-dd1deb2dec89 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.929842228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598168929817523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4452e87-b5b9-49c2-bc0c-dd1deb2dec89 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.930506363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=875bc9f2-4e4d-4b51-8018-cecca5d95117 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.930619295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=875bc9f2-4e4d-4b51-8018-cecca5d95117 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:36:08 old-k8s-version-190698 crio[631]: time="2024-09-17 18:36:08.930657660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=875bc9f2-4e4d-4b51-8018-cecca5d95117 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep17 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054866] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046421] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.149899] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.871080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681331] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep17 18:28] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072788] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.186947] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145789] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.292905] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.819811] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.084662] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874135] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +13.062004] kauditd_printk_skb: 46 callbacks suppressed
	[Sep17 18:32] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Sep17 18:34] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.068770] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:36:09 up 8 min,  0 users,  load average: 0.04, 0.13, 0.08
	Linux old-k8s-version-190698 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00075a6f0)
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a87ef0, 0x4f0ac20, 0xc000c1aaf0, 0x1, 0xc0001000c0)
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc0001000c0)
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b97100, 0xc000c24e80)
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 17 18:36:05 old-k8s-version-190698 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 17 18:36:05 old-k8s-version-190698 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 17 18:36:05 old-k8s-version-190698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 17 18:36:06 old-k8s-version-190698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 17 18:36:06 old-k8s-version-190698 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 17 18:36:06 old-k8s-version-190698 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 17 18:36:06 old-k8s-version-190698 kubelet[5525]: I0917 18:36:06.712514    5525 server.go:416] Version: v1.20.0
	Sep 17 18:36:06 old-k8s-version-190698 kubelet[5525]: I0917 18:36:06.712925    5525 server.go:837] Client rotation is on, will bootstrap in background
	Sep 17 18:36:06 old-k8s-version-190698 kubelet[5525]: I0917 18:36:06.715326    5525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 17 18:36:06 old-k8s-version-190698 kubelet[5525]: I0917 18:36:06.716549    5525 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 17 18:36:06 old-k8s-version-190698 kubelet[5525]: W0917 18:36:06.716633    5525 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (244.25724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-190698" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (739.95s)
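The failure above reduces to a kubelet that never reports healthy, and the captured output itself suggests checking the kubelet service and retrying with a systemd cgroup driver. A minimal diagnostic sketch along those lines, assuming the old-k8s-version-190698 profile name and the minikube binary path taken from the logs above; the final retry flag is the one the output suggests, not a verified fix:

# Check kubelet state on the node (mirrors the 'systemctl status kubelet' / 'journalctl -xeu kubelet' hints above)
out/minikube-linux-amd64 ssh -p old-k8s-version-190698 "sudo systemctl status kubelet --no-pager"
out/minikube-linux-amd64 ssh -p old-k8s-version-190698 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"

# List any control-plane containers CRI-O started (the logs above found none)
out/minikube-linux-amd64 ssh -p old-k8s-version-190698 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

# Retry the start with the cgroup driver the error output recommends
out/minikube-linux-amd64 start -p old-k8s-version-190698 --extra-config=kubelet.cgroup-driver=systemd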

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328741 -n no-preload-328741
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:41:50.503189365 +0000 UTC m=+6379.577761634
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328741 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-328741 logs -n 25: (2.37524628s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
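The sysctl probe above fails because the br_netfilter module is not loaded yet, so the fallback is to modprobe it and then enable IPv4 forwarding. A hedged Go sketch of that probe-then-fallback order, assuming root and using only the commands shown in the log (the wrapper function itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the order seen in the log: check whether the
// bridge-netfilter sysctl is visible, load br_netfilter if it is not,
// then turn on IPv4 forwarding. Requires root, like the logged commands.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key only exists once br_netfilter is loaded.
		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}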
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
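The block above is minikube deciding, image by image, whether the runtime already holds each Kubernetes image at the expected ID: it inspects with podman image inspect --format {{.Id}}, and when the ID is missing or different it removes the stale tag with crictl rmi and marks the image as needing a transfer from the local cache. A rough sketch of that compare-then-remove decision; the example image and expected ID are copied from the log, but the helper is illustrative rather than the actual cache_images.go logic:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime is missing the image at the
// expected ID, mirroring the "needs transfer: ... does not exist at hash"
// decisions in the log. Commands are run locally here for illustration.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all: must transfer
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Image name and expected ID taken from the log above.
	img := "registry.k8s.io/kube-scheduler:v1.31.1"
	want := "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"
	if needsTransfer(img, want) {
		// Remove the stale tag before loading the cached tarball.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		fmt.Println(img, "needs transfer from the local image cache")
	}
}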
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
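Once a cached tarball is confirmed to already exist under /var/lib/minikube/images (the stat and "copy: skipping ... (exists)" pairs above), the images are loaded into the runtime one at a time with sudo podman load -i, and each success is reported as "Transferred and loaded ... from cache". A minimal sketch of that sequential load loop, using the tarball names from the log; the loop itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tarballs already staged on the node, per the log.
	tarballs := []string{
		"/var/lib/minikube/images/kube-controller-manager_v1.31.1",
		"/var/lib/minikube/images/etcd_3.5.15-0",
		"/var/lib/minikube/images/coredns_v1.11.3",
		"/var/lib/minikube/images/kube-proxy_v1.31.1",
		"/var/lib/minikube/images/kube-apiserver_v1.31.1",
		"/var/lib/minikube/images/kube-scheduler_v1.31.1",
	}
	for _, t := range tarballs {
		// Load sequentially, as in the "crio.go:275 Loading image:" lines.
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s", t, err, out)
			continue
		}
		fmt.Println("transferred and loaded", t, "from cache")
	}
}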
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
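The WaitForSSH step shells out to the system ssh client with the non-interactive options listed above and simply runs exit 0; a zero exit status means the guest's sshd is up and the key is accepted. A rough sketch of that readiness probe, reusing the host, user, and key path from the log; the polling wrapper around it is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest via the external ssh binary, using the
// same non-interactive options the log shows, and reports success.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.58" // from the DHCP lease in the log
	key := "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa"
	for !sshReady(host, key) {
		time.Sleep(2 * time.Second) // poll until sshd answers
	}
	fmt.Println("SSH is available")
}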
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
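The runtime probe above shells out to `sudo /usr/bin/crictl version` and reads its plain-text output. A short sketch of the same probe from Go, assuming crictl is on the node's PATH as shown in the log:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same probe as the log, then pick out the runtime name and
	// version lines from crictl's plain-text output.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl probe failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "RuntimeName:") || strings.HasPrefix(line, "RuntimeVersion:") {
			fmt.Println(line)
		}
	}
}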
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
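The kubeadm.yaml rendered above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents before kubelet is started. A minimal sketch of reading back a few KubeletConfiguration fields, assuming gopkg.in/yaml.v3 is available; the struct here is a trimmed illustration, not the real upstream type:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig is a deliberately small view of the KubeletConfiguration
// document shown in the log above, for illustration only.
type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	doc := `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
`
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}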
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
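The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes of the corresponding CA certificates, which is how OpenSSL locates trusted CAs in /etc/ssl/certs. A sketch of recreating one such link by shelling out to openssl, roughly what the chained test/ln commands do (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks certPath into certsDir under "<subject-hash>.0",
// the naming scheme OpenSSL uses when scanning a CA directory.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the log links certs from /usr/share/ca-certificates
	// into /etc/ssl/certs.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}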
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
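The grep/rm pairs above discard any leftover kubeconfig that no longer points at control-plane.minikube.internal:8443 before kubeadm regenerates them. A compact equivalent of that per-file decision, as a hypothetical helper (the real code drives the same logic over SSH):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os"
)

// removeIfStale deletes path when it exists but does not mention the expected
// control-plane endpoint, mirroring the grep-then-rm sequence in the log.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		return os.Remove(path)
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}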
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
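The api_server.go lines above poll https://192.168.72.182:8443/healthz until the apiserver stops answering 500 (the "[-]poststarthook/..." entries are hooks that have not finished yet) and finally returns 200. The following is a minimal sketch of such a poll loop; it is an illustration only, not minikube's implementation, and the function name, retry interval, and timeout are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 or the overall timeout expires. A 500 whose body lists
// "[-]poststarthook/..." entries, as in the log above, only means some
// post-start hooks have not completed yet.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// During bootstrap the probe is anonymous and the apiserver cert is
		// not yet trusted by this client, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// URL taken from the log above; the one-minute budget is an assumption.
	if err := waitForHealthz("https://192.168.72.182:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}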
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
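The kubeadm config rendered above is a single "---"-separated YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log shows being copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of reading such a multi-document file with gopkg.in/yaml.v3 follows; the file path comes from the log, everything else is illustrative rather than minikube code.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above (the ".new" file before it is moved into place).
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more "---"-separated documents
			}
			panic(err)
		}
		// Print each document's apiVersion and kind, e.g. "kubeadm.k8s.io/v1beta3 ClusterConfiguration".
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}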
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
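The bash one-liner two lines above rewrites /etc/hosts idempotently: it drops any existing line for control-plane.minikube.internal, appends the current mapping, and copies the result back before kubelet is restarted. A rough Go equivalent is sketched below, using the IP and hostname from the log; it is an illustration under those assumptions, not the code minikube actually runs.

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts removes any stale entry for name and appends "ip<TAB>name",
// mirroring the grep -v / echo / cp sequence in the log above.
func updateHosts(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that does not already map the control-plane name.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log above; writing /etc/hosts requires root.
	if err := updateHosts("/etc/hosts", "192.168.39.58", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}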
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
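The "openssl x509 -noout -in <cert> -checkend 86400" runs above confirm that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. Below is a rough Go equivalent using only the standard library; the certificate path in main is one of those from the log, and the helper name is an assumption for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path will still be
// valid after duration d, matching the intent of "-checkend <seconds>".
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}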
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
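The pod_ready.go waits above repeatedly check each system-critical pod's Ready condition until it flips to True or the 4m0s budget runs out. A minimal client-go sketch of that check is below; the kube-system namespace and the coredns pod name are taken from the log, while the kubeconfig path and polling interval are assumptions, and this is not minikube's pod_ready.go itself.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location on the node; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-5wm4j", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}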
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
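
	The lines above show minikube pointing crictl at the CRI-O socket, pinning the pause image, and switching CRI-O to the cgroupfs cgroup manager before restarting the runtime. A minimal sketch of the same steps run by hand over SSH on the guest (commands and paths are taken directly from the logged ssh_runner calls; the drop-in file layout is whatever the minikube Buildroot image ships):

	  # point crictl at the CRI-O socket (as in the logged /etc/crictl.yaml write)
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pin the pause image and cgroup manager in the CRI-O drop-in config
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # reload units, restart CRI-O, then confirm the runtime answers over CRI
	  sudo systemctl daemon-reload && sudo systemctl restart crio
	  sudo crictl version
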
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
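
	Because no preloaded images were found in the runtime, the log above falls back to copying the lz4 preload tarball into the guest and unpacking it under /var. A rough manual equivalent, assuming you copy the tarball to the guest yourself (minikube uses its own SSH runner for that step), would be:

	  # on the guest: unpack the preload tarball so CRI-O's image store is
	  # pre-populated before kubeadm runs (same tar invocation as the log)
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  # confirm the images are now visible to the runtime
	  sudo crictl images --output json
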
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
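
	With neither the preload nor the local image cache supplying the v1.20.0 images, the runtime will end up pulling them during cluster bootstrap. If you wanted to pre-pull them on the guest instead, a sketch using the same crictl binary the log invokes (image names are the ones listed in the LoadCachedImages line above) would be:

	  # pre-pull the control-plane images that LoadCachedImages could not supply
	  for img in \
	    registry.k8s.io/kube-apiserver:v1.20.0 \
	    registry.k8s.io/kube-controller-manager:v1.20.0 \
	    registry.k8s.io/kube-scheduler:v1.20.0 \
	    registry.k8s.io/kube-proxy:v1.20.0 \
	    registry.k8s.io/etcd:3.4.13-0 \
	    registry.k8s.io/coredns:1.7.0 \
	    registry.k8s.io/pause:3.2 \
	    gcr.io/k8s-minikube/storage-provisioner:v5; do
	      sudo crictl pull "$img"
	  done
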
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
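
	The ExecStart line above is what minikube later writes into the kubelet systemd drop-in (the "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf" step further down in this log). Installing an equivalent drop-in by hand would look roughly like this sketch; file paths and flags are copied from the log, the surrounding unit text is abbreviated:

	  # install the kubelet drop-in with the flags shown above, then restart kubelet
	  sudo mkdir -p /etc/systemd/system/kubelet.service.d
	  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	  [Unit]
	  Wants=crio.service

	  [Service]
	  ExecStart=
	  ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl start kubelet
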
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
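Note: the "new ssh client" entries above correspond to dialing the VM over SSH with the machine's private key and the "docker" user. A rough Go sketch of that setup using golang.org/x/crypto/ssh follows; the key path, address and user are taken from the log, everything else is illustrative rather than minikube's sshutil code.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address as reported in the log; adjust for your environment.
        keyPath := "/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.50.61:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Run a single command, analogous to the ssh_runner.go Run calls in the log.
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("cat /version.json")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }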
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
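Note: the sed invocations above rewrite keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). The same kind of in-place edit expressed as a small Go sketch; the file path and values come from the log, the helper itself is illustrative.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // setKey replaces any existing `key = ...` line in conf with `key = "value"`,
    // mirroring the sed -i 's|^.*key = .*$|...|' calls in the log.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            log.Fatal(err)
        }
    }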
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
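Note: the /etc/hosts handling above is an idempotent "drop any old entry for the name, append the current one" rewrite. A minimal Go sketch of the same idea; the helper name and in-memory approach are assumptions for illustration.

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "\t<host>" and appends "<ip>\t<host>",
    // matching the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }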
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
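Note: the openssl/ln sequence above installs each certificate under its OpenSSL subject-hash name (e.g. 3ec20f2e.0, b5213941.0) so the system trust store can resolve it. A small Go sketch of that pattern, shelling out to openssl exactly as the log does; everything else is illustrative.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into /etc/ssl/certs as <hash>.0, where
    // <hash> comes from `openssl x509 -hash -noout -in certPath`.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // already present, matching the `test -L ... ||` guard in the log
        }
        fmt.Printf("linking %s -> %s\n", link, certPath)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }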
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
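Note: the `-checkend 86400` runs above simply ask whether each certificate will still be valid 24 hours from now. The equivalent test in Go's crypto/x509 looks roughly like this; the path and helper name are illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within d, which is what `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }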
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
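Note: each of the four grep/rm pairs above applies the same rule: if a leftover kubeconfig does not mention https://control-plane.minikube.internal:8443, remove it so the following kubeadm phases regenerate it. A compact Go sketch of that loop, purely illustrative.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // Missing file or wrong endpoint: make sure it is gone (errors for
                // already-absent files are ignored) so kubeadm recreates it.
                os.Remove(f)
                fmt.Println("ensuring absent:", f)
                continue
            }
            fmt.Println("keeping:", f)
        }
    }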
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
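Note: the restart path regenerates certs, kubeconfigs, kubelet config, control-plane static pods and etcd by running individual `kubeadm init phase` subcommands with the versioned binaries directory prepended to PATH. A rough Go sketch of that driver loop, mirroring the `sudo env PATH=... kubeadm init phase ...` commands above; it is not minikube's bootstrapper code.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
            // Mirror the `env PATH=/var/lib/minikube/binaries/v1.31.1:$PATH` prefix from the log.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm init phase %v failed: %v", phase, err)
            }
        }
    }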
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
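Note: the restart then polls the apiserver's /healthz endpoint, treating connection refused, 403 (anonymous access before RBAC bootstrap completes) and 500 (poststarthooks still failing) all as "keep waiting". A stripped-down Go sketch of that loop; the overall timeout is an assumed value and certificate verification is skipped here purely for brevity.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout for this sketch
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.61:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("not ready yet, status:", resp.StatusCode) // e.g. 403 or 500 as in the log
            } else {
                fmt.Println("not ready yet:", err) // e.g. connection refused right after restart
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for apiserver /healthz")
    }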
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
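Every "describe nodes" attempt in this log fails the same way: the kubectl endpoint localhost:8443 refuses connections, which is consistent with the empty crictl listings above (no kube-apiserver container ever comes up). A tiny sketch of a port check follows, again purely illustrative and assumed to run on the node itself; it is not part of the test harness.

// dialcheck.go: confirm whether anything is listening on the apiserver port
// that kubectl is trying to reach (localhost:8443). Hypothetical helper.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// A "connection refused" here corresponds to the kubectl error in the
		// log: nothing is bound to 8443 because kube-apiserver never started.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}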
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
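The readiness checks logged above for the embed-certs-081863 profile (the apiserver /healthz probe, the kube-system pod listing, the kubelet service check) can be reproduced by hand when a start reports success but a later test step fails. A minimal sketch, assuming the profile name and apiserver endpoint shown in the log and a working local kubeconfig; these commands are illustrative and not part of the test harness:

	# Probe the same healthz endpoint the wait loop polls (-k only because the test cluster CA is not in the local trust store)
	curl -k https://192.168.50.61:8443/healthz
	# List the kube-system pods the log enumerates, including their Ready conditions
	kubectl --context embed-certs-081863 -n kube-system get pods -o wide
	# Confirm the kubelet unit is active on the node, mirroring the 'systemctl is-active' check above
	minikube ssh -p embed-certs-081863 -- sudo systemctl is-active kubelet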
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
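The failed run above ends with minikube's own hint that the v1.20.0 kubelet never became healthy, most likely a kubelet/CRI-O cgroup-driver mismatch. A minimal retry sketch based only on that hint; the profile name is a placeholder (the real old-k8s-version profile name is not shown in this excerpt) and the flags mirror the log's suggestion rather than a verified fix:

	# Re-run the start with the cgroup driver the suggestion names
	minikube start -p <old-k8s-version-profile> --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# If it still times out, inspect the kubelet journal on the node, as the kubeadm output recommends
	minikube ssh -p <old-k8s-version-profile> -- sudo journalctl -u kubelet --no-pager -n 100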
	
	
	==> CRI-O <==
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.260001646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b6c36af-dd65-4a39-90e1-450b870c299b name=/runtime.v1.RuntimeService/Version
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.261276228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51065ea3-4a8b-4b0f-aa44-ec3d3eae2102 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.261818487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598512261792326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51065ea3-4a8b-4b0f-aa44-ec3d3eae2102 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.262515178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f27c3ab9-c0ee-4691-bb37-3a0d61d2ae10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.262569768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f27c3ab9-c0ee-4691-bb37-3a0d61d2ae10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.262787078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f27c3ab9-c0ee-4691-bb37-3a0d61d2ae10 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.307922117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cece9934-6e6e-41af-841d-61fa224f0d42 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.308019598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cece9934-6e6e-41af-841d-61fa224f0d42 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.309412828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0218eb9f-623d-492e-a7fd-6e172783fa8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.309782259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598512309759538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0218eb9f-623d-492e-a7fd-6e172783fa8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.310555852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=056f1ee1-efe5-45b5-9dc4-8bcaed004d83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.310630443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=056f1ee1-efe5-45b5-9dc4-8bcaed004d83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.310843391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=056f1ee1-efe5-45b5-9dc4-8bcaed004d83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.352863877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15f2b747-88dc-4dbb-99a0-bee55f1fe457 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.352940078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f2b747-88dc-4dbb-99a0-bee55f1fe457 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.354405377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c254d89a-227c-4664-97be-50a3e4989053 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.354761447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598512354735563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c254d89a-227c-4664-97be-50a3e4989053 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.355523243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec9ab1ea-16c2-4edf-82ca-30f4c0260f3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.355580392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec9ab1ea-16c2-4edf-82ca-30f4c0260f3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.355790973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec9ab1ea-16c2-4edf-82ca-30f4c0260f3b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.361038797Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=01cc95e7-6cb9-4673-b3fc-0c0e88a03038 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.361418789Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d3c58c47d00fe7117932b0a5f382dfa642c045dc285bb1985768099c2a5c1398,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-cvttg,Uid:1b2d6700-2e3c-4a35-9794-0ec095eed0d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597960244207554,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-cvttg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2d6700-2e3c-4a35-9794-0ec095eed0d4,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:32:39.931495774Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:03a8e7f5-ea70-4653-837b-5ad54de48136,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597960126815146,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-17T18:32:39.817358402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&PodSandboxMetadata{Name:kube-proxy-2945m,Uid:8a7b75b4-28c5-476a-b002-05313976c138,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597958631233808,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:32:37.724314833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qv4pq,Uid
:31f7e4b5-3870-41a1-96f8-8e13511fe684,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597958406689679,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31f7e4b5-3870-41a1-96f8-8e13511fe684,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:32:38.092252534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gddwk,Uid:57f85dd3-be48-4648-8d70-7a06aeaecdc2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597958341677838,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,k8s-app: kube-dns,pod-templat
e-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-17T18:32:38.029818465Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-328741,Uid:34afefdc5cd5dfffb05860cfe10789d3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726597947274615106,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.182:8443,kubernetes.io/config.hash: 34afefdc5cd5dfffb05860cfe10789d3,kubernetes.io/config.seen: 2024-09-17T18:32:26.819710835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0bc1617cf8d0b7b093f166452216af
03951825a2a2245534f60f18dd359ffd5,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-328741,Uid:4a7f5bfd03d56b992d0996fc63641b99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597947267962811,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.182:2379,kubernetes.io/config.hash: 4a7f5bfd03d56b992d0996fc63641b99,kubernetes.io/config.seen: 2024-09-17T18:32:26.819729694Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-328741,Uid:e88e3c9e3660a4a2fc689695534e4c55,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597947258027606,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e88e3c9e3660a4a2fc689695534e4c55,kubernetes.io/config.seen: 2024-09-17T18:32:26.819722353Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-328741,Uid:64eabf95ac53e177e5c6b586c85b9274,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726597947257868201,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 64eabf95ac53e177e5c6b586c85b9274,kubernetes.io/config.seen: 2024-09-17T18:32:26.819714977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-328741,Uid:34afefdc5cd5dfffb05860cfe10789d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726597661440621745,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.182:8443,kubernetes.io/config.hash: 34afefdc5cd5dfffb05860cfe10789d3,kubernetes.io/config.seen: 2024-09-17T18:27:40.939662520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=01cc95e7-6cb9-4673-b3fc-0c0e88a03038 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.362552173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a85babb8-94e9-4748-b7e4-5acd577acdcf name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.362603821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a85babb8-94e9-4748-b7e4-5acd577acdcf name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:41:52 no-preload-328741 crio[703]: time="2024-09-17 18:41:52.362809623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a85babb8-94e9-4748-b7e4-5acd577acdcf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e25a907995d38       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   288779327f47e       storage-provisioner
	0c2c1757b5010       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   dce3e4d35ce33       coredns-7c65d6cfc9-qv4pq
	f85a4e2d08da4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6b63e0f9c879e       coredns-7c65d6cfc9-gddwk
	808a6dceb7063       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   ada4f57161adf       kube-proxy-2945m
	b91b94fc0010b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   530a84d7c7b43       kube-controller-manager-no-preload-328741
	e54abd262e269       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   fd02b06c4661d       kube-apiserver-no-preload-328741
	49e528cc460f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   c0bc1617cf8d0       etcd-no-preload-328741
	976b4c709134c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   53ac81c8f4a08       kube-scheduler-no-preload-328741
	4e46c0fa82cfc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   2c2bd6cac4686       kube-apiserver-no-preload-328741
	
	
	==> coredns [0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-328741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-328741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=no-preload-328741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:32:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-328741
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:41:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:37:50 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:37:50 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:37:50 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:37:50 +0000   Tue, 17 Sep 2024 18:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    no-preload-328741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 159bd2b5f8b94daca6c02b7ffef2b2e6
	  System UUID:                159bd2b5-f8b9-4dac-a6c0-2b7ffef2b2e6
	  Boot ID:                    e330fa09-6d35-43d5-8b23-1c8e7bf952a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gddwk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-qv4pq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-no-preload-328741                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-328741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-328741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-2945m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-no-preload-328741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-cvttg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-328741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-328741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-328741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node no-preload-328741 event: Registered Node no-preload-328741 in Controller
	
	
	==> dmesg <==
	[  +0.052080] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041143] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862010] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.929271] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.057298] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067527] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.195977] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.125163] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.298048] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +16.021909] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.059667] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.021214] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.577933] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.122932] kauditd_printk_skb: 82 callbacks suppressed
	[Sep17 18:32] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +0.071057] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.503827] systemd-fstab-generator[3288]: Ignoring "noauto" option for root device
	[  +0.096771] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.363111] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	[  +0.107648] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.775165] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959] <==
	{"level":"info","ts":"2024-09-17T18:32:27.942173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:27.942295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:27.942403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 received MsgPreVoteResp from ff4c26660998c2c8 at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:27.942439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:27.942466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 received MsgVoteResp from ff4c26660998c2c8 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:27.945154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ff4c26660998c2c8 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:27.942439Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.182:2380"}
	{"level":"info","ts":"2024-09-17T18:32:27.942407Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:32:27.945362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:27.945771Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ff4c26660998c2c8","initial-advertise-peer-urls":["https://192.168.72.182:2380"],"listen-peer-urls":["https://192.168.72.182:2380"],"advertise-client-urls":["https://192.168.72.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:32:27.948597Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.182:2380"}
	{"level":"info","ts":"2024-09-17T18:32:27.948979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:32:27.955389Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.963640Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ff4c26660998c2c8","local-member-attributes":"{Name:no-preload-328741 ClientURLs:[https://192.168.72.182:2379]}","request-path":"/0/members/ff4c26660998c2c8/attributes","cluster-id":"1c15affd5c0f3dba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:32:27.963742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:27.964723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:27.965691Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:27.979280Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.182:2379"}
	{"level":"info","ts":"2024-09-17T18:32:27.979414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1c15affd5c0f3dba","local-member-id":"ff4c26660998c2c8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.979544Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.979606Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.968617Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:27.987458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:32:28.025157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:28.025326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:41:52 up 14 min,  0 users,  load average: 0.27, 0.27, 0.17
	Linux no-preload-328741 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9] <==
	W0917 18:32:21.610030       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.611490       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.613934       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.615280       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.636446       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.669464       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.672990       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.704066       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.777509       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.779905       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.791885       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.830607       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.927152       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.960453       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.990708       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.025508       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.105164       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.189991       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.307464       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.347524       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.411682       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.449574       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.450941       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.483585       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.749813       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:37:31.187597       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:37:31.187633       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:37:31.188687       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:37:31.188807       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:38:31.189561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:38:31.189684       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:38:31.189807       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:38:31.189844       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:38:31.191010       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:38:31.191042       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:40:31.191871       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:40:31.192314       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:40:31.192231       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:40:31.192443       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:40:31.193991       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:40:31.194041       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6] <==
	E0917 18:36:37.160797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:36:37.619594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:37:07.167414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:07.628027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:37:37.176346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:37.639160       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:37:50.374321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-328741"
	E0917 18:38:07.187253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:07.648839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:38:37.193718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:37.656887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:38:40.119805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="227.305µs"
	I0917 18:38:51.109422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="166.1µs"
	E0917 18:39:07.200856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:07.665798       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:39:37.208327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:37.674232       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:07.215728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:07.682434       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:37.223217       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:37.691710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:07.230187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:07.702009       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:37.237281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:37.714555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:32:39.515289       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:32:39.542858       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.182"]
	E0917 18:32:39.542969       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:32:39.813964       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:32:39.814053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:32:39.814148       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:32:39.816811       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:32:39.817289       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:32:39.817338       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:32:39.818822       1 config.go:199] "Starting service config controller"
	I0917 18:32:39.818903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:32:39.818957       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:32:39.818978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:32:39.819676       1 config.go:328] "Starting node config controller"
	I0917 18:32:39.819766       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:32:39.920266       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:32:39.923263       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:32:39.962193       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136] <==
	W0917 18:32:31.114635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:32:31.114747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.127308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:32:31.127865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.145408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:32:31.146248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.167005       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:32:31.167161       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:32:31.260506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 18:32:31.261018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.315068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:31.315331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.392117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 18:32:31.392268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.413854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:31.414040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.489902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 18:32:31.490478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.518510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:31.519380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.554004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:32:31.554216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.554490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 18:32:31.554779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 18:32:33.251420       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:40:43 no-preload-328741 kubelet[3296]: E0917 18:40:43.090722    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:40:43 no-preload-328741 kubelet[3296]: E0917 18:40:43.205336    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598443204887105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:40:43 no-preload-328741 kubelet[3296]: E0917 18:40:43.205491    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598443204887105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:40:53 no-preload-328741 kubelet[3296]: E0917 18:40:53.206980    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598453206669277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:40:53 no-preload-328741 kubelet[3296]: E0917 18:40:53.207348    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598453206669277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:40:55 no-preload-328741 kubelet[3296]: E0917 18:40:55.090020    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:41:03 no-preload-328741 kubelet[3296]: E0917 18:41:03.211074    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598463209307334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:03 no-preload-328741 kubelet[3296]: E0917 18:41:03.212879    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598463209307334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:10 no-preload-328741 kubelet[3296]: E0917 18:41:10.090502    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:41:13 no-preload-328741 kubelet[3296]: E0917 18:41:13.214563    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598473213982791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:13 no-preload-328741 kubelet[3296]: E0917 18:41:13.214900    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598473213982791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:23 no-preload-328741 kubelet[3296]: E0917 18:41:23.091870    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:41:23 no-preload-328741 kubelet[3296]: E0917 18:41:23.217312    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598483216795409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:23 no-preload-328741 kubelet[3296]: E0917 18:41:23.217402    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598483216795409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]: E0917 18:41:33.150203    3296 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]: E0917 18:41:33.220272    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598493219658511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:33 no-preload-328741 kubelet[3296]: E0917 18:41:33.220315    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598493219658511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:35 no-preload-328741 kubelet[3296]: E0917 18:41:35.091507    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:41:43 no-preload-328741 kubelet[3296]: E0917 18:41:43.221556    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598503221042250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:43 no-preload-328741 kubelet[3296]: E0917 18:41:43.221885    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598503221042250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:49 no-preload-328741 kubelet[3296]: E0917 18:41:49.090188    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	
	
	==> storage-provisioner [e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6] <==
	I0917 18:32:40.426772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:32:40.444804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:32:40.444898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:32:40.459137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:32:40.460047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"caff1387-3893-41de-a0f4-a5fcc852dbf2", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1 became leader
	I0917 18:32:40.460531       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1!
	I0917 18:32:40.561457       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328741 -n no-preload-328741
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-328741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cvttg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg: exit status 1 (73.063386ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cvttg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.57s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0917 18:33:10.821042   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:41:58.405344893 +0000 UTC m=+6387.479917162
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-438836 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-438836 logs -n 25: (2.268702426s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
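The SSH command a few lines up writes /etc/sysconfig/crio.minikube so that CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry (in-cluster registry services can then be pulled from without TLS), and restarts crio to pick it up. On a typical systemd host such an environment file is consumed by a unit drop-in along these lines; this is a hypothetical sketch of the wiring, not the minikube ISO's actual unit:

    # Hypothetical drop-in showing how an EnvironmentFile like crio.minikube is usually consumed.
    sudo mkdir -p /etc/systemd/system/crio.service.d
    printf '%s\n' '[Service]' 'EnvironmentFile=-/etc/sysconfig/crio.minikube' \
      'ExecStart=' 'ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS' \
      | sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio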
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
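Taken together, the crictl.yaml write and the sed edits above leave CRI-O pointed at the expected socket, pause image and cgroup driver before the restart. A quick way to see the net effect on the VM (the expected values in the comments are reconstructed from the commands above, not read back from the machine):

    # Net effect of the configuration commands above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock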
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
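The /etc/hosts edit just above uses a filter-then-append pattern: any existing control-plane.minikube.internal line is dropped before the fresh mapping is appended, so repeated restarts never accumulate duplicates. A generalized form of the same idea (the helper name is hypothetical):

    add_hosts_entry() {  # add_hosts_entry <ip> <name>; keeps exactly one line per name
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    add_hosts_entry 192.168.72.182 control-plane.minikube.internal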
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
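The openssl x509 -hash / ln -fs pairs above install each CA under OpenSSL's hashed-name convention: a symlink named <subject-hash>.0 in /etc/ssl/certs lets the default verify path discover the certificate (b5213941.0 above is the subject hash of minikubeCA.pem). The same step done by hand for one CA:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"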
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
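The -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit is presumably what flags a certificate for regeneration rather than reuse (no regeneration happens in this run). For a single certificate the check looks like this:

    crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$crt" -checkend 86400; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h (or is unreadable); would need regeneration"
    fi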
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
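
	The 77433 run above polls the apiserver /healthz endpoint roughly every 500ms, riding out the transient 403 (anonymous user) and 500 (post-start hooks still settling) responses until it gets a plain 200 "ok" about five seconds in. The Go sketch below illustrates that style of polling loop only; it is not minikube's api_server.go, and the URL, timeout, and poll interval are placeholder values taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200, tolerating the 403/500
	// responses seen while the apiserver's post-start hooks finish.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver presents a cluster-internal cert during bring-up; a real
			// client would trust the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the log's "returned 200: ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.182:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
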
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
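
	The run of openssl invocations above checks that each control-plane certificate stays valid for at least 86400 seconds (24 hours). Below is a hedged Go sketch of the same check, shelling out to openssl exactly as the log does; the certificate paths are copied from the log and the helper name is made up for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h mirrors `openssl x509 -noout -in <path> -checkend 86400`:
	// openssl exits 0 only if the certificate will still be valid in 24 hours.
	func certValidFor24h(path string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
	}

	func main() {
		// Paths copied from the checks in the log above.
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			fmt.Printf("%s valid for the next 24h: %v\n", c, certValidFor24h(c))
		}
	}
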
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
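
	After the addon phase, the 77433 run waits up to 4m0s for each system-critical pod to report Ready, logging "Ready":"False" on every poll until the condition flips. The small client-go sketch below shows that kind of wait as an assumed illustration (the kubeconfig path, poll interval, and pod name are placeholders), not the pod_ready.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a pod until its PodReady condition is True or the
	// context expires, roughly what the pod_ready log lines above are doing.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitForPodReady(ctx, cs, "kube-system", "coredns-7c65d6cfc9-cgmx9"))
	}
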
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
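
	Process 77819 replays the kubeadm restart phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd), each through bash with the pinned v1.31.1 binaries prepended to PATH. The rough Go sketch below runs the same sequence locally under the paths shown in the log; it stands in for minikube's ssh_runner rather than reproducing it.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phases mirrored from the log; each runs through bash so the pinned
		// kubeadm in /var/lib/minikube/binaries/v1.31.1 wins over the system PATH.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			shell := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
			out, err := exec.Command("/bin/bash", "-c", shell).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
				return
			}
		}
	}
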
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
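For reference, the two hostname-provisioning commands issued over SSH above amount to the following consolidated shell sketch; HOST is a stand-in for the profile name used in the log, and the commands themselves are the ones already shown:
	HOST=old-k8s-version-190698
	# set the kernel hostname and persist it
	sudo hostname "$HOST" && echo "$HOST" | sudo tee /etc/hostname
	# make sure /etc/hosts resolves the new hostname via 127.0.1.1
	if ! grep -xq ".*\s$HOST" /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts
		else
			echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
		fi
	fi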
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
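For reference, the delta reported above is simply the guest clock minus the host-side (Remote) timestamp: 1726597680.523257617 s - 1726597680.449744487 s = 0.07351313 s, i.e. 73.51313 ms, which is inside the skew tolerance, so no clock adjustment is performed.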
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
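The container-runtime preparation logged in the 78008 run above (crictl endpoint, pause image, cgroup driver, CNI cleanup, netfilter) can be read as the following consolidated shell sketch; these are the same commands already shown in the log, gathered together for readability, not additional steps:
	# point crictl at the CRI-O socket
	sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'
	# pause image and cgroup settings for this Kubernetes version go into 02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	# br_netfilter was not loaded (the sysctl stat failed above), so load it and enable forwarding
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio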
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
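The preload handling in the 78008 run above follows a simple pattern: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if it does not, then unpack it into /var so CRI-O starts with the images for this Kubernetes version. A minimal shell sketch of those steps follows; the scp below stands in for minikube's internal ssh_runner copy, and the first and last commands run on the guest:
	# on the guest: is the preload tarball already present?
	stat -c "%s %y" /preloaded.tar.lz4
	# if not, copy it from the host-side cache (scp stands in for minikube's ssh_runner copy)
	scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.61.143:/preloaded.tar.lz4
	# on the guest: unpack the image store into /var, preserving extended attributes, then clean up
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm -f /preloaded.tar.lz4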
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
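For reference, the CRI-O preparation logged above boils down to roughly this sequence on the guest (a condensed sketch of the same commands the log shows; the socket path, drop-in file and pause-image tag are taken from the log lines above):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup driver in the CRI-O drop-in
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # bridged traffic must hit iptables and IPv4 forwarding must be on
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio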
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
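The kubelet flags above end up in a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A hypothetical spot-check on the guest, using standard systemd tooling, would be:

    systemctl cat kubelet.service                 # unit plus the generated 10-kubeadm.conf drop-in
    systemctl show kubelet --property=ExecStart   # effective ExecStart after the drop-in is applied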
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
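Before the restart phases further down, the kubeadm.yaml.new written above could, hypothetically, be sanity-checked with a dry run against the same binary directory the log references (the test itself does not do this):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run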
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
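The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the link name is the certificate's subject hash with a .0 suffix. As a minimal sketch of the pattern the log follows:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem)
    sudo ln -fs /etc/ssl/certs/182592.pem "/etc/ssl/certs/${HASH}.0"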
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
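The -checkend 86400 probes above exit 0 only when the certificate remains valid for at least another 24 hours, which is how the run confirms the existing control-plane certificates are not about to expire. Run by hand it looks like:

    # exits 0 if the cert is still valid 24h from now, 1 if it will have expired by then
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt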
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
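The 403 responses earlier in this poll are typical of the window before the RBAC policy that permits anonymous access to /healthz has been bootstrapped; once it is in place the same unauthenticated request is authorized and the verbose healthz body above (with its remaining [-] post-start hooks) is returned instead, which matches the [-]poststarthook/rbac/bootstrap-roles entries shown. A manual probe equivalent to what api_server.go is polling would be roughly (hypothetical command; -k because the apiserver's serving certificate is not in the host trust store):

    curl -ks 'https://192.168.50.61:8443/healthz?verbose'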
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
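	(The repeated 500s above come from minikube probing the apiserver's /healthz endpoint roughly every 500ms until it answers 200. A minimal Go sketch of that polling loop follows; it assumes a self-signed apiserver certificate, so TLS verification is skipped for illustration only, and uses a hypothetical waitForHealthz helper rather than minikube's actual api_server.go code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout, interval time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// assumption: skip TLS verification for illustration only
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// a non-200 body lists the [+]/[-] poststarthook checks, as in the log
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// endpoint taken from the log above; cadence matches the ~500ms probes
		if err := waitForHealthz("https://192.168.50.61:8443/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}

	(In the log the loop exits at 18:28:35.798 once /healthz returns 200 "ok", after about 6 seconds of probing.)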
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
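	(The pod_ready.go entries above wait for each system-critical pod to report the Ready condition. A minimal client-go sketch of the same check follows; the kubeconfig path and polling cadence are assumptions for illustration, and the helper names are hypothetical rather than minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, which is
	// what the "Ready":"True"/"False" entries in the log correspond to.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// assumption: kubeconfig path is illustrative; pod name is from the log above
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-embed-certs-081863", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // roughly the cadence seen in the log
		}
	}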
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
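	(The cri.go/logs.go block above enumerates containers by name with "sudo crictl ps -a --quiet --name=..." and falls back to kubelet/dmesg/CRI-O log collection when none are found. A minimal sketch of that listing step follows; it assumes crictl is on the node's PATH and is invoked locally via os/exec, with a hypothetical helper rather than minikube's cri.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs matching a name filter,
	// mirroring the crictl invocation shown in the log.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		// an empty result corresponds to the `found id: ""` / "0 containers" /
		// `No container was found matching "kube-apiserver"` lines in the log
		fmt.Printf("found %d kube-apiserver containers: %v\n", len(ids), ids)
	}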
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
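The block ending above, and its repeats at 18:31:58, 18:32:01, 18:32:05 and 18:32:08 below, is minikube's diagnostics loop for process 78008: every crictl query finds no control-plane containers and the describe-nodes call is refused on localhost:8443, so it re-gathers kubelet, dmesg, CRI-O and container-status logs and retries. The /var/lib/minikube/binaries/v1.20.0 path suggests this is the old-k8s-version profile, but its full name is not shown in this excerpt, so the profile below is a placeholder. The same probes can be issued by hand from inside the node:

    # Placeholder profile name; substitute the v1.20.0 profile from this run.
    minikube -p PROFILE ssh
    # Then, inside the node, the same checks minikube runs above:
    sudo crictl ps -a --name=kube-apiserver
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager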
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
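The interleaved pod_ready lines tagged 77264, 77433 and 77819 are separate profiles polling their metrics-server pod in kube-system; the 77433 wait has just expired after 4m0s and that profile now falls back to a cluster reset. To inspect such a pod by hand one could run, for example, the commands below, shown against the no-preload-328741 context that appears later in this log; the selector assumes the addon's usual k8s-app=metrics-server label:

    # Inspect why metrics-server never becomes Ready (context name from this log).
    kubectl --context no-preload-328741 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context no-preload-328741 -n kube-system describe deployment metrics-server
    kubectl --context no-preload-328741 -n kube-system logs deployment/metrics-server --tail=100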
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
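At this point kubeadm reset has already wiped /etc/kubernetes, so each grep for the control-plane endpoint exits with status 2 (file missing) and minikube deletes the file before re-running kubeadm init. The cleanup above is roughly equivalent to this shell sketch, with the endpoint and paths taken from the log:

    # Remove kubeconfigs that are missing or do not point at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done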
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
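kubeadm's wait-control-plane phase first polls the kubelet's local healthz endpoint and then the API server, using the URLs shown in the log. From a shell on the control-plane node the same probes can be issued by hand, for example:

    # Health endpoints kubeadm polls above (run from inside the node).
    curl -s http://127.0.0.1:10248/healthz; echo     # kubelet
    curl -sk https://localhost:8443/healthz; echo    # kube-apiserver, self-signed cert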
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
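The two deprecation warnings come from the generated /var/tmp/minikube/kubeadm.yaml still using the kubeadm.k8s.io/v1beta3 API. minikube regenerates this file itself, but for a hand-maintained config the migration the warning suggests would look like the following sketch (the output path is illustrative):

    # Rewrite a v1beta3 kubeadm config with the current API version, per the warning above.
    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.new.yaml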
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
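Here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's contents are not included in the log. To see what was actually installed, one could shell into the profile and read it back:

    # Inspect the bridge CNI config that was just copied onto the node.
    minikube -p no-preload-328741 ssh
    # Then, inside the node:
    sudo ls -l /etc/cni/net.d
    sudo cat /etc/cni/net.d/1-k8s.conflist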
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
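	(The block above shows minikube programmatically enabling default-storageclass, storage-provisioner and metrics-server for the "no-preload-328741" profile. For reference, the same addons can be toggled from the minikube CLI; a minimal sketch, assuming the profile name from the log and nothing else:)

	    # Enable the same addons against the no-preload-328741 profile
	    minikube addons enable storage-provisioner -p no-preload-328741
	    minikube addons enable metrics-server      -p no-preload-328741
	    # default-storageclass is on by default; confirm the overall addon state with:
	    minikube addons list -p no-preload-328741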
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
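	(The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it before kubeadm is re-run. A standalone shell sketch of the same idea; minikube performs this from Go, so the loop below is illustrative only:)

	    endpoint="https://control-plane.minikube.internal:8444"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Drop any kubeconfig that does not point at the expected endpoint
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done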
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
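	(kubeadm warns that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API for ClusterConfiguration and InitConfiguration, and that the kubelet service is not enabled. The warnings name their own fixes; a sketch using the config path from the log, with the output path as a placeholder:)

	    # Rewrite the v1beta3 kubeadm config as the newer API version suggested by the warning
	    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-new.yaml
	    # Silence the Service-Kubelet preflight warning
	    sudo systemctl enable kubelet.service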
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
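	(Once the metrics-server manifests above are applied, the aggregated metrics API takes a short while to become available. A quick way to check it from outside the test, assuming the kubectl context minikube creates for this profile:)

	    # The APIService for metrics-server is named v1beta1.metrics.k8s.io
	    kubectl --context no-preload-328741 -n kube-system get apiservice v1beta1.metrics.k8s.io
	    # Returns data only after the metrics-server pod is Ready and serving
	    kubectl --context no-preload-328741 top nodes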
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
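	(The test is polling pod "coredns-7c65d6cfc9-gddwk" until it reports Ready. The same condition can be watched or waited on directly with kubectl; a sketch using the standard k8s-app=kube-dns label instead of the generated pod name, and the 6m timeout from the log:)

	    kubectl --context no-preload-328741 -n kube-system get pods -l k8s-app=kube-dns
	    kubectl --context no-preload-328741 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m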
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
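	(Here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file contents are not shown in the log. Purely as an illustration of what a bridge conflist of this kind typically looks like, and not minikube's actual file, a sketch with an assumed pod subnet:)

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF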
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
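	(The repeated "kubectl get sa default" calls above are how minikube waits for the default service account to appear after the control plane comes up, as part of the elevateKubeSystemPrivileges step that also created the minikube-rbac clusterrolebinding earlier. A standalone equivalent of that polling loop, using the exact command from the log; minikube itself does this from Go rather than a shell loop:)

	    # Poll until the default ServiceAccount exists
	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done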
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
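	(The healthz probe above hits the API server endpoint at 192.168.72.182:8443 directly and gets a 200/"ok". The same check can be run by hand once the kubeconfig is in place; the curl variant may require credentials depending on the cluster's anonymous-auth settings:)

	    kubectl --context no-preload-328741 get --raw /healthz
	    curl -k https://192.168.72.182:8443/healthz   # "ok" when healthy, as seen above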
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
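	(In both pod listings, metrics-server-6867b74b74-cvttg is still Pending with ContainersNotReady; note the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier, which suggests the image is deliberately unpullable in this test. To see the reason from kubectl, assuming the usual k8s-app=metrics-server label on the minikube deployment:)

	    kubectl --context no-preload-328741 -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl --context no-preload-328741 -n kube-system logs -l k8s-app=metrics-server --tail=50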
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
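	(The NodePressure check above reads the node's capacity, 17734596Ki of ephemeral storage and 2 CPUs, along with its pressure conditions. Equivalent lookups with kubectl, using standard jsonpath expressions:)

	    kubectl --context no-preload-328741 get node no-preload-328741 -o jsonpath='{.status.capacity}{"\n"}'
	    kubectl --context no-preload-328741 get node no-preload-328741 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'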
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
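	(At this point minikube has written the kubeconfig and set the current context to the new profile. Since several profiles are started in this run, switching between them is just a kubectl context change; a sketch using the two profile names from this log:)

	    kubectl config get-contexts
	    kubectl config use-context no-preload-328741
	    kubectl config use-context default-k8s-diff-port-438836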
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
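Note: each addon enabled above follows the same two-step pattern visible in the log: the manifest is copied onto the node ("scp memory --> /etc/kubernetes/addons/...") and then applied with the node-local kubectl binary and kubeconfig. A rough sketch of that pattern follows; the binary, kubeconfig, and directory paths are the ones in the log, while the manifest contents and file name here are placeholders, not the real addon YAML.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder manifest; minikube writes the actual addon YAML here.
        manifest := []byte("# addon manifest contents would go here\n")
        path := "/etc/kubernetes/addons/example.yaml" // hypothetical file name

        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            fmt.Println("write manifest:", err)
            return
        }

        // Apply it with the node's bundled kubectl and kubeconfig, as in the log.
        cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply", "-f", path)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("kubectl apply failed:", err)
        }
    }
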
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
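Note: every pod_ready.go line above boils down to polling a pod until its Ready condition reports True. The client-go sketch below shows that check in isolation; it is not minikube's actual code, and the kubeconfig path and pod name are simply taken from this run for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // the condition each "waiting for pod ... to be Ready" line is watching.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        name := "coredns-7c65d6cfc9-8nrnc" // pod name taken from the log above
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
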
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
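Note: the healthz check above is a plain HTTPS GET against the apiserver that must return 200 with the body "ok". A small sketch of the same probe follows; the endpoint is copied from the log, and TLS verification is skipped here only for brevity, whereas the real check authenticates with the cluster's certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is only for this sketch.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.58:8444/healthz" // endpoint from the log above
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    }
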
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
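Note: the [kubelet-check] lines above are kubeadm repeatedly probing the kubelet's local healthz endpoint until it answers or the 4m0s deadline expires; "connection refused" simply means the kubelet is not up yet. The loop below is a rough illustration of that probe, not kubeadm's actual code.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            } else {
                fmt.Println("kubelet not responding yet:", err)
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for a healthy kubelet")
    }
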
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
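Note: the config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here they are all missing after the reset), so kubeadm init can regenerate them. The snippet below is a rough equivalent of that cleanup, not minikube's implementation.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s is missing or stale, removing\n", f)
                os.Remove(f) // ignore errors, as with `rm -f`
            }
        }
    }
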
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
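Note: "Configuring bridge CNI" above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown in the log. The sketch below writes a typical bridge-plus-portmap conflist to the same path purely as an illustration of what such a file looks like; it is not the exact configuration minikube generated in this run.

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative bridge+portmap CNI configuration, NOT the 496-byte file
    // written by minikube above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }
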
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
	
	
	==> CRI-O <==
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.001046380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598520001021914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e93183c5-cdfa-4c0d-8a4d-cb9975bf8753 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.001711475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c5986ec-e67e-4351-9d8a-308b8cc8ed84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.001770006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c5986ec-e67e-4351-9d8a-308b8cc8ed84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.001993865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c5986ec-e67e-4351-9d8a-308b8cc8ed84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.045036001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ceb9a177-623a-4c7b-b2e0-3572f3dd0930 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.045129457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ceb9a177-623a-4c7b-b2e0-3572f3dd0930 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.046879790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bceac96d-7884-4d72-b028-23695f44077b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.047298989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598520047274022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bceac96d-7884-4d72-b028-23695f44077b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.047888331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a79e320f-087d-4fd2-99b8-5f2b4b823878 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.047974289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a79e320f-087d-4fd2-99b8-5f2b4b823878 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.048200035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a79e320f-087d-4fd2-99b8-5f2b4b823878 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.088619101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e16e0291-d24d-46be-895c-6bd6077dabe1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.088774348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e16e0291-d24d-46be-895c-6bd6077dabe1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.089973608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8607c6fc-705f-4ca0-80e9-e62428bdf105 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.090389769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598520090365908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8607c6fc-705f-4ca0-80e9-e62428bdf105 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.090916410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a9b7f8d-7c58-4177-a13b-aa9a1e175204 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.091000859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a9b7f8d-7c58-4177-a13b-aa9a1e175204 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.091246613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a9b7f8d-7c58-4177-a13b-aa9a1e175204 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.131885919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abbf8d05-c471-418c-b8cc-052c1718aaec name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.131999116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abbf8d05-c471-418c-b8cc-052c1718aaec name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.134279313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b118de12-5fee-4548-9443-b3934db972d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.134774252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598520134747984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b118de12-5fee-4548-9443-b3934db972d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.135421322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da0dcd45-da10-445d-9fab-f085ac7791a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.135479349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da0dcd45-da10-445d-9fab-f085ac7791a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:00 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:42:00.135791467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da0dcd45-da10-445d-9fab-f085ac7791a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fbd0e5e760d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7923027a515fa       storage-provisioner
	a0c6cb1df5f79       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   edaaa54b9023e       coredns-7c65d6cfc9-8nrnc
	9840debe68b50       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ba8a4f783612b       coredns-7c65d6cfc9-x4l48
	8198df1218bca       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   3fb89d2bd06f6       kube-proxy-xwqtr
	c01bdc8e5cd2f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   dbc153b93eb31       etcd-default-k8s-diff-port-438836
	e40fdd1fd764d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   747cd257b791e       kube-controller-manager-default-k8s-diff-port-438836
	5a88b1fb4a49a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   e4ec6be3f735c       kube-scheduler-default-k8s-diff-port-438836
	b58b4f5db1ade       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   c76c1f3a1fcee       kube-apiserver-default-k8s-diff-port-438836
	5bd03b090b920       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   f8b9965d59910       kube-apiserver-default-k8s-diff-port-438836
	
	
	==> coredns [9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-438836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-438836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=default-k8s-diff-port-438836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-438836
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:38:00 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:38:00 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:38:00 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:38:00 +0000   Tue, 17 Sep 2024 18:32:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    default-k8s-diff-port-438836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 121495779e0d4310bb99eb1555fdbd16
	  System UUID:                12149577-9e0d-4310-bb99-eb1555fdbd16
	  Boot ID:                    ad02a2b6-bf44-4181-9070-705b317051e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8nrnc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-x4l48                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-438836                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-438836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-438836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-xwqtr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-438836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-qnfv2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node default-k8s-diff-port-438836 event: Registered Node default-k8s-diff-port-438836 in Controller
	  Normal  CIDRAssignmentFailed     9m13s                  cidrAllocator    Node default-k8s-diff-port-438836 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.052542] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.915767] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574460] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641227] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.740995] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.064887] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076457] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.218343] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.160693] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.376709] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.780578] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +0.067769] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.386700] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +5.655402] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.801206] kauditd_printk_skb: 85 callbacks suppressed
	[Sep17 18:32] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.393770] systemd-fstab-generator[2615]: Ignoring "noauto" option for root device
	[  +4.766861] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.800419] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +5.800095] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.268975] systemd-fstab-generator[3172]: Ignoring "noauto" option for root device
	[  +6.450932] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c] <==
	{"level":"info","ts":"2024-09-17T18:32:38.825048Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:32:38.825310Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ded7f9817c909548","initial-advertise-peer-urls":["https://192.168.39.58:2380"],"listen-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:32:38.825353Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:32:38.825520Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-17T18:32:38.825632Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2024-09-17T18:32:39.050734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:39.050808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:39.050831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:39.050847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.054926Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:default-k8s-diff-port-438836 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:32:39.055095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:39.055592Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.057946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:39.061033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:39.062398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	{"level":"info","ts":"2024-09-17T18:32:39.066555Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:39.069505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:32:39.071029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:39.075729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:39.075925Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.077834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.077933Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:42:00 up 14 min,  0 users,  load average: 0.23, 0.26, 0.24
	Linux default-k8s-diff-port-438836 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5] <==
	W0917 18:32:30.248021       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.251053       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.264752       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.309301       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.331945       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.375292       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.380949       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.385495       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.446278       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.551957       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.551957       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.553294       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.592605       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.658192       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.736986       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.781305       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.843139       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.857193       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.867857       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.893730       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.964164       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:31.007781       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:34.771412       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:35.043367       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:35.159594       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0] <==
	W0917 18:37:42.124001       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:37:42.124194       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:37:42.125326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:37:42.125422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:38:42.126182       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:38:42.126286       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:38:42.126375       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:38:42.126419       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:38:42.127560       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:38:42.127627       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:40:42.128426       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:40:42.128577       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:40:42.128697       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:40:42.128714       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:40:42.129740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:40:42.129813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23] <==
	E0917 18:36:48.012136       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:36:48.551394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:37:18.018775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:18.560193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:37:48.025824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:48.569931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:38:00.469993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-438836"
	E0917 18:38:18.033377       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:18.580202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:38:48.040295       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:48.590567       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:39:05.653079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="230.669µs"
	I0917 18:39:16.650244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="206.984µs"
	E0917 18:39:18.047089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:18.599879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:39:48.054028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:48.608342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:18.060780       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:18.618258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:48.067851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:48.628455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:18.075627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:18.637574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:48.083107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:48.646577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:32:49.855997       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:32:49.901183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0917 18:32:49.901286       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:32:50.112081       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:32:50.112119       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:32:50.112143       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:32:50.116304       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:32:50.116614       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:32:50.116625       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:32:50.119600       1 config.go:199] "Starting service config controller"
	I0917 18:32:50.119635       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:32:50.119747       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:32:50.119753       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:32:50.120417       1 config.go:328] "Starting node config controller"
	I0917 18:32:50.120460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:32:50.221768       1 shared_informer.go:320] Caches are synced for node config
	I0917 18:32:50.221814       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:32:50.221841       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f] <==
	W0917 18:32:41.132728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:41.135283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.032550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:32:42.032705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.043108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:32:42.043234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.044682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.044792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.072466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:32:42.072528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.118207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:32:42.118270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.191111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:42.191167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.260329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 18:32:42.260386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.319704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.319838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.353379       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:32:42.353519       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:32:42.409474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.409564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.470722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.470780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0917 18:32:45.000820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:40:50 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:40:50.633273    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:40:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:40:53.827422    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598453827154469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:40:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:40:53.827446    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598453827154469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:02 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:02.633069    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:41:03 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:03.829393    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598463828887885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:03 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:03.829436    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598463828887885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:13 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:13.633634    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:41:13 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:13.831422    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598473831130191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:13 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:13.831469    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598473831130191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:23 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:23.834312    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598483833862676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:23 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:23.834371    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598483833862676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:27 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:27.634395    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:41:33 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:33.835786    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598493835300475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:33 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:33.836157    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598493835300475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:39 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:39.632969    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:43.722923    2943 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:43.839903    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598503838257040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:43.839978    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598503838257040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:52 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:52.633712    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:41:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:53.843312    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598513841965067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:41:53.844701    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598513841965067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2] <==
	I0917 18:32:51.464714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:32:51.475466       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:32:51.475512       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:32:51.494524       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:32:51.494721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d!
	I0917 18:32:51.497516       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b5836b4-3547-40fb-980a-2268372245a3", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d became leader
	I0917 18:32:51.595115       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
E0917 18:42:02.206402   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-qnfv2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2: exit status 1 (67.922621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-qnfv2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.47s)
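If the default-k8s-diff-port-438836 cluster is still running, the state this test timed out waiting for can be re-checked by hand. The commands below are illustrative only (they are not part of the harness output); the dashboard label and namespace are assumed to match the ones the analogous embed-certs check in the next section waits for, and the field selector mirrors the one the post-mortem above already used:

	kubectl --context default-k8s-diff-port-438836 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-438836 -n kube-system get pods --field-selector=status.phase!=Running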

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0917 18:33:50.532362   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:33:58.728168   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:34:48.896653   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:34:56.205694   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-081863 -n embed-certs-081863
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:42:33.025187442 +0000 UTC m=+6422.099759719
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
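Before reading the post-mortem dump below, a quick manual sketch of the same check can help narrow down why no dashboard pod appeared within 9m0s. These commands are illustrative, run against the same profile; the label and namespace are the ones the wait above targets:

	kubectl --context embed-certs-081863 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-081863 -n kubernetes-dashboard get events --sort-by=.lastTimestamp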
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-081863 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-081863 logs -n 25: (2.268792472s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
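	For readability, the wrapped flags in the last start row above reassemble into a single command line (reconstructed purely from the table entries; the binary is whatever the job's MINIKUBE_BIN points at, out/minikube-linux-amd64 in this run):
	
	  minikube start -p old-k8s-version-190698 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0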
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
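	Read with that format, the first entry below, "I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...", decodes as: Info-level severity, September 17, 18:23:50.674050, thread id 78008, emitted from out.go line 345.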
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
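
The block above is minikube waiting for the restarted kube-apiserver to become healthy: anonymous probes of /healthz first get 403 (the RBAC rules that normally allow unauthenticated health checks have not been recreated yet), then 500 while the remaining poststarthooks (the [-] entries) finish, and finally 200. As a rough illustration only, and not minikube's actual implementation, a minimal Go polling loop against the same endpoint could look like the sketch below; the URL and timeout values are taken from this log purely for the example, and TLS verification is skipped because only the status code is of interest.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Only the HTTP status code matters for this probe, so the apiserver's
	// self-signed serving certificate is not verified here.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.72.182:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reports healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}

In this run the wait completed quickly ("took 5.019651057s to wait for apiserver health" above).
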
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
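
	The kubeadm config dumped above consists of four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Notable settings for this profile: the API server listens on 8444 rather than the usual 8443 (this is the default-k8s-diff-port profile), kubelet disk-pressure eviction is effectively disabled (all evictionHard thresholds at 0% and imageGCHighThresholdPercent at 100), and conntrack tuning is skipped (maxPerCore: 0). Purely as an illustration, and assuming gopkg.in/yaml.v3 is available, a few of the kubelet fields could be read back as shown below; the struct and its field set here are hypothetical, not minikube's.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletSettings covers only the handful of fields printed below; the real
// KubeletConfiguration type lives in k8s.io/kubelet/config/v1beta1.
type kubeletSettings struct {
	ClusterDomain string `yaml:"clusterDomain"`
	CgroupDriver  string `yaml:"cgroupDriver"`
	StaticPodPath string `yaml:"staticPodPath"`
	FailSwapOn    bool   `yaml:"failSwapOn"`
}

// A fragment of the KubeletConfiguration document from the log above.
const fragment = `
clusterDomain: "cluster.local"
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false
`

func main() {
	var cfg kubeletSettings
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
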
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
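
	The run of openssl x509 -noout -checkend 86400 commands above checks whether each control-plane certificate will expire within the next 86400 seconds (24 hours); a failing check would force certificate regeneration before the cluster is restarted. Shown only as an illustration (not minikube's code), a standard-library Go equivalent of one such check, using the first certificate path from the log, could look like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -noout -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
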
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
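	fix.go above runs `date +%s.%N` on the guest and compares the result with the host clock, logging the delta and whether it falls inside a tolerance. A minimal sketch of that comparison (the exact tolerance minikube applies is not shown in the log, so the one-second value below is only an assumption):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns the
	// difference between the guest and host clocks.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		const tolerance = time.Second // assumed tolerance, for illustration only
		// Values taken from the log entry above.
		delta, err := clockDelta("1726597700.949100009", time.Unix(1726597700, 853955257))
		if err != nil {
			panic(err)
		}
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, clock would need adjusting\n", delta)
		}
	}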
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
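	The interleaved lines from pid 78008 poll `sudo pgrep -xnf kube-apiserver.*minikube.*` about every 500ms while that profile waits for its apiserver process to appear. A small local sketch of the same wait loop (pattern and interval taken from the log; the one-minute timeout is assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until the pattern matches or the timeout expires.
	func waitForProcess(pattern string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// -x: exact match, -n: newest process, -f: match the full command line.
			if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // a matching process exists
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("no process matching %q after %v", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is up")
	}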
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
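	Since no preloaded images were found on the guest, the ~388 MB preload tarball was copied in and unpacked into /var with `tar -I lz4`, after which `crictl images` confirms the images are present. A local sketch of the extraction and verification steps (the real commands run over SSH inside the VM; the tarball path here is a placeholder and lz4 must be installed):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Extract a cri-o preload tarball into /var, preserving xattrs and
		// capabilities, mirroring the command shown in the log.
		tarball := "/preloaded.tar.lz4" // placeholder path
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}

		// Verify the runtime now sees the preloaded images.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			log.Fatalf("crictl images failed: %v", err)
		}
		log.Printf("crictl returned %d bytes of image metadata", len(out))
	}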
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
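	Earlier the runtime was switched to cgroup_manager = "cgroupfs" in /etc/crio/crio.conf.d/02-crio.conf, and the KubeletConfiguration rendered above (just copied to /var/tmp/minikube/kubeadm.yaml.new) sets cgroupDriver: cgroupfs to match; disagreement between kubelet and runtime cgroup drivers is a common source of startup trouble. A hedged sketch that parses that section and asserts the driver (the YAML snippet is trimmed from the config above; this is an illustration, not a check minikube performs in this form):

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// Only the fields this check cares about from kubelet.config.k8s.io/v1beta1.
	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	// A trimmed copy of the KubeletConfiguration document rendered in the log above.
	const rendered = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(rendered), &cfg); err != nil {
			log.Fatal(err)
		}
		if cfg.CgroupDriver != "cgroupfs" {
			log.Fatalf("kubelet cgroupDriver %q does not match cri-o's cgroup_manager", cfg.CgroupDriver)
		}
		fmt.Println("kubelet and cri-o agree on the cgroupfs driver via", cfg.ContainerRuntimeEndpoint)
	}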
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
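	The `openssl x509 -noout -in <cert> -checkend 86400` calls above confirm that each control-plane certificate remains valid for at least another 24 hours before the existing certs are reused. An equivalent check written against Go's crypto/x509 (the path below is just the first cert from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("certificate expires within 24h, would need regenerating")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}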
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
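	Once kubeadm's control-plane phases have been re-run, api_server.go polls https://192.168.50.61:8443/healthz roughly every 500ms; the connection-refused error here and the 403/500 responses further down are all treated as "not healthy yet" until a clean 200 arrives. A rough sketch of such a wait loop (TLS verification is skipped only to keep the example short; a real check should trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout expires. Any error or non-200 status counts as "not ready".
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Skipped only for brevity in this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.61:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned ok")
	}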
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
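	(The cycle above is minikube's control-plane diagnostic pass: probe for an apiserver process, ask CRI-O via crictl for each expected control-plane container, and, when none are found, fall back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal shell sketch of the same probe, using only the commands that appear verbatim in the log; the loop structure is an illustrative assumption, not the test's own code:)
	  # Probe for a running apiserver process, as in the pgrep line above.
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	  # Ask the CRI runtime for each expected control-plane container; an empty
	  # result corresponds to the 'found id: ""' entries in the log.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container matching $name"
	  done
	  # Fall back to host-level logs, mirroring the "Gathering logs for ..." steps.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400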
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
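	(Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl reads the apiserver address from /var/lib/minikube/kubeconfig, which points at localhost:8443 on the node, and the connection is refused because no kube-apiserver container is running. A quick manual confirmation from inside the node, assuming shell access via `minikube ssh`; the paths, port, and kubectl invocation are taken from the log, while the ss check is an illustrative assumption:)
	  # Is anything listening on the apiserver port the kubeconfig points at?
	  sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	  # The same check the log performs, run by hand with the bundled kubectl.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get nodes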
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
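	(The pod_ready lines interleaved through this section come from separate test processes (77264, 77433, 77819) each polling its own metrics-server pod, which never reports Ready. An equivalent manual check against one of those clusters is sketched below; the k8s-app=metrics-server label selector is the upstream addon default and, like the jsonpath expression, an assumption rather than something shown in the log:)
	  # Show the Ready condition that the poll above keeps finding "False".
	  kubectl -n kube-system get pods -l k8s-app=metrics-server \
	    -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	  # Or describe the pod to see why readiness is failing.
	  kubectl -n kube-system describe pod -l k8s-app=metrics-server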
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
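Each "Gathering logs for ..." pass above runs a fixed set of commands on the node over SSH; the cycle keeps repeating because the API server is still down and no control-plane containers are found. Run by hand on the guest, the same commands from the ssh_runner lines are:

  # The commands behind the "Gathering logs for ..." steps above
  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u crio -n 400
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a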
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
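The sequence above applies the same pattern to each of the four kubeconfig files before kubeadm init is re-run: grep for the expected control-plane endpoint, and remove the file if it is missing or does not reference it. Condensed into a single loop, with the endpoint and paths taken from the log, that cleanup is:

  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # Remove the file unless it already points at the expected endpoint
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done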
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
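While the v1.20.0 kubeadm above waits for the kubelet to bring up the control plane from static Pod manifests, progress can be checked directly on the node. A short sketch, reusing the crictl and journalctl invocations that appear elsewhere in this log:

  ls /etc/kubernetes/manifests                      # static Pod manifests kubeadm just wrote
  sudo crictl ps -a --quiet --name=kube-apiserver   # is the apiserver container up yet?
  sudo journalctl -u kubelet -n 50 --no-pager       # kubelet errors if the Pods never start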
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
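The kubelet-check and api-check phases above poll two health endpoints until they report healthy. Checked by hand on the node, using the same ports shown in the log:

  curl -s  http://127.0.0.1:10248/healthz    # kubelet healthz (prints "ok" when healthy)
  curl -sk https://localhost:8443/healthz    # API server healthz on the local port 8443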
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
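The repeated `kubectl get sa default` runs above are a wait loop: minikube retries roughly every 500ms until the default service account exists, which is the signal that elevateKubeSystemPrivileges can finish. Written as an explicit loop around the exact command from the log, the same wait is:

  # Retry until the default service account is available
  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done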
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
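In the toEnable map above only default-storageclass, metrics-server and storage-provisioner are set to true for this profile. Enabling the same addons explicitly from the host would look roughly like this (via the minikube CLI, whereas the test enables them through start flags):

  minikube -p no-preload-328741 addons enable metrics-server
  minikube -p no-preload-328741 addons enable storage-provisioner
  minikube -p no-preload-328741 addons enable default-storageclass
  minikube -p no-preload-328741 addons list    # confirm what is enabled for the profile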
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
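	(For reference, the healthz probe logged just above is a plain HTTPS GET against the apiserver; a minimal sketch of the same request from a shell, with the address and port taken from the log line — `-k` and anonymous access to /healthz are assumptions here, since the test itself authenticates with the cluster's client certificates:
		curl -k https://192.168.50.61:8443/healthz   # expects HTTP 200 with body "ok"
	)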
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
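	(For reference, the checks logged above — metrics-server enabled at 18:33:27 and all control-plane pods Ready by 18:33:32 — can be reproduced against the same cluster with plain kubectl; a minimal sketch, assuming the kubeconfig context written by the "Done!" line and the kube-system namespace used throughout this log:
		kubectl --context embed-certs-081863 -n kube-system get pods
		kubectl --context embed-certs-081863 -n kube-system get deploy metrics-server
		kubectl --context embed-certs-081863 top nodes   # only returns data once metrics-server is Ready
	)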
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
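	(The suggestion above points at a kubelet/cri-o cgroup-driver mismatch as the usual cause of this K8S_KUBELET_NOT_RUNNING failure with v1.20.0 on cri-o. A minimal sketch of how one might confirm it on the node before retrying with --extra-config=kubelet.cgroup-driver=systemd — the file paths are the usual defaults and the profile name is a placeholder, neither is taken from this log:
		# inside the minikube VM, e.g. via `minikube ssh -p <profile>`
		grep cgroup_manager /etc/crio/crio.conf          # cri-o's cgroup driver
		grep cgroupDriver /var/lib/kubelet/config.yaml   # kubelet's cgroup driver
		cat /var/lib/kubelet/kubeadm-flags.env           # older releases may pass --cgroup-driver here instead
	Kubernetes requires the kubelet and the container runtime to use the same cgroup driver (systemd vs cgroupfs) for the kubelet to come up.)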
	
	
	==> CRI-O <==
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.655261368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598554655237227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcbf6470-e2c7-4f83-9134-7bd14a151224 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.656096388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d4f3ae1-d731-41cc-b8e6-695b32174a87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.656152174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d4f3ae1-d731-41cc-b8e6-695b32174a87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.656365549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d4f3ae1-d731-41cc-b8e6-695b32174a87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.694336951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c6e6312-0460-4fd7-b41f-dfa68982ecbe name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.694416784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c6e6312-0460-4fd7-b41f-dfa68982ecbe name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.695584941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=159c8ecf-700f-4ce8-89f5-a0d65d2e4fe9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.696002532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598554695978391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=159c8ecf-700f-4ce8-89f5-a0d65d2e4fe9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.696544427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=190fac09-a185-4774-a1ce-1e9654685225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.696624141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=190fac09-a185-4774-a1ce-1e9654685225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.696849899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=190fac09-a185-4774-a1ce-1e9654685225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.740207518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0613a57a-1930-4bb9-aa0b-2093e9bc7ffb name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.740281847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0613a57a-1930-4bb9-aa0b-2093e9bc7ffb name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.741358935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccdb829b-2f59-46ae-be55-03ae22c97184 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.742133542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598554742105692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccdb829b-2f59-46ae-be55-03ae22c97184 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.742706519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f15e93f2-c737-4903-8ecd-078e4f28ef57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.742760302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f15e93f2-c737-4903-8ecd-078e4f28ef57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.742964409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f15e93f2-c737-4903-8ecd-078e4f28ef57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.778905426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0edec70a-57a8-41f1-9b5c-d81cbddc3d41 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.778981730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0edec70a-57a8-41f1-9b5c-d81cbddc3d41 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.780439288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01dc4a7b-b061-4d79-8d6d-b116b404ecd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.781021042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598554780995318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01dc4a7b-b061-4d79-8d6d-b116b404ecd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.781595248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a6cb5ce-759b-4c2b-95b2-8b572bafffbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.781647880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a6cb5ce-759b-4c2b-95b2-8b572bafffbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:42:34 embed-certs-081863 crio[712]: time="2024-09-17 18:42:34.781882647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a6cb5ce-759b-4c2b-95b2-8b572bafffbe name=/runtime.v1.RuntimeService/ListContainers
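
	(Editor's note, not part of the captured report: the crio journal above is dominated by /runtime.v1.RuntimeService/ListContainers, Version and ImageFsInfo polls. A minimal Go sketch of issuing the same ListContainers RPC against the node's CRI socket is shown below; it assumes direct access to unix:///var/run/crio/crio.sock and the k8s.io/cri-api and google.golang.org/grpc modules, and is a reproduction aid only, not part of the minikube test code.)

	// list_containers.go — hedged sketch: query CRI-O's ListContainers like the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; the "unix://" target scheme uses a unix-domain dialer.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Empty filter mirrors the "No filters were applied" responses in the journal.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}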
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2fd2982baed9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   65b6042b431ea       coredns-7c65d6cfc9-662sf
	8dfd4aa286a96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ff5d150cca779       coredns-7c65d6cfc9-dxjr7
	1aa96d2aea6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a770382b9e310       storage-provisioner
	0a62d26a92dc9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   69994d5984084       kube-proxy-7w64h
	b0345045d68d0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   e6ac0bad70806       kube-controller-manager-embed-certs-081863
	e4757efa9abab       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   9b15f72b553f9       kube-apiserver-embed-certs-081863
	0cd490a48dfb6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   ab9a0859267cf       kube-scheduler-embed-certs-081863
	fbb0b6d28b2ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   7f79282b9b1b5       etcd-embed-certs-081863
	19bcc0b5d3726       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   8a2ddcc7f3ffa       kube-apiserver-embed-certs-081863
	
	
	==> coredns [8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-081863
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-081863
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=embed-certs-081863
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-081863
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:42:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:38:38 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:38:38 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:38:38 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:38:38 +0000   Tue, 17 Sep 2024 18:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.61
	  Hostname:    embed-certs-081863
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 484b610558c74ad0a08f35c832966507
	  System UUID:                484b6105-58c7-4ad0-a08f-35c832966507
	  Boot ID:                    f49a0f38-8397-4d05-9ae1-35d932263375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-662sf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-7c65d6cfc9-dxjr7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-081863                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m14s
	  kube-system                 kube-apiserver-embed-certs-081863             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-embed-certs-081863    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-7w64h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-081863             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-98t8z               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node embed-certs-081863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s                  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s                  kubelet          Node embed-certs-081863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s                  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s                  node-controller  Node embed-certs-081863 event: Registered Node embed-certs-081863 in Controller
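
	(Editor's note, not part of the captured report: the node conditions summarized in the "describe nodes" section above can also be read programmatically. The Go sketch below is a hedged illustration using client-go; the kubeconfig path is an assumption, and the node name "embed-certs-081863" is taken from the report.)

	// node_conditions.go — hedged sketch: print the node conditions shown above via client-go.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); adjust if minikube wrote it elsewhere.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-081863", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same Type/Status/Reason columns as the Conditions table in the report.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}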
	
	
	==> dmesg <==
	[  +0.044771] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.253306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702251] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.700482] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.165682] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.064829] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059806] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.195482] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.155993] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.300584] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.332498] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.070969] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.934659] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +4.592666] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.329192] kauditd_printk_skb: 54 callbacks suppressed
	[Sep17 18:29] kauditd_printk_skb: 31 callbacks suppressed
	[Sep17 18:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.264527] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +5.190181] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.902582] systemd-fstab-generator[2869]: Ignoring "noauto" option for root device
	[  +4.410741] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +0.098701] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.689755] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db] <==
	{"level":"info","ts":"2024-09-17T18:33:16.647013Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:33:16.647592Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.61:2380"}
	{"level":"info","ts":"2024-09-17T18:33:16.647682Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.61:2380"}
	{"level":"info","ts":"2024-09-17T18:33:16.650322Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e29fd3db84bd8ae5","initial-advertise-peer-urls":["https://192.168.50.61:2380"],"listen-peer-urls":["https://192.168.50.61:2380"],"advertise-client-urls":["https://192.168.50.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:33:16.652215Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:33:17.034554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T18:33:17.034664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T18:33:17.034700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 received MsgPreVoteResp from e29fd3db84bd8ae5 at term 1"}
	{"level":"info","ts":"2024-09-17T18:33:17.034740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 received MsgVoteResp from e29fd3db84bd8ae5 at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e29fd3db84bd8ae5 elected leader e29fd3db84bd8ae5 at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.040725Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e29fd3db84bd8ae5","local-member-attributes":"{Name:embed-certs-081863 ClientURLs:[https://192.168.50.61:2379]}","request-path":"/0/members/e29fd3db84bd8ae5/attributes","cluster-id":"1b36a7ea249c729a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:33:17.040829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:33:17.041263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:33:17.042027Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:33:17.050539Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:33:17.050609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:33:17.046776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:33:17.055203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:33:17.048570Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.054292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.61:2379"}
	{"level":"info","ts":"2024-09-17T18:33:17.121264Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b36a7ea249c729a","local-member-id":"e29fd3db84bd8ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.124582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.124771Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:42:35 up 14 min,  0 users,  load average: 0.13, 0.19, 0.15
	Linux embed-certs-081863 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599] <==
	W0917 18:33:09.563771       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.590705       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.594296       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.622191       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.635384       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.635663       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.726658       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.731067       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.744700       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.750304       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.750712       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.796053       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.931680       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.935417       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.945421       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.983301       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.045221       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.076110       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.076358       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.093360       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.159023       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.195005       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.207101       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.268783       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.347417       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c] <==
	E0917 18:38:20.000020       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0917 18:38:20.000180       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:38:20.001333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:38:20.001415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:39:20.002170       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:39:20.002433       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:39:20.002385       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:39:20.002654       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:39:20.003793       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:39:20.003899       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:41:20.004412       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:41:20.004589       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:41:20.004668       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:41:20.004756       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:41:20.005789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:41:20.005825       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea] <==
	E0917 18:37:26.013031       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:26.494392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:37:56.020980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:37:56.503056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:38:26.028647       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:26.519793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:38:38.417819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-081863"
	E0917 18:38:56.034829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:38:56.528005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:39:26.042009       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:26.537353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:39:33.000535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="276.2µs"
	I0917 18:39:46.003056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="85.342µs"
	E0917 18:39:56.048928       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:39:56.545903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:26.056909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:26.559321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:40:56.064951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:40:56.567802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:26.071938       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:26.576132       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:41:56.079414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:41:56.586896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:42:26.087813       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:42:26.599677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:33:28.454705       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:33:28.465201       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.61"]
	E0917 18:33:28.465295       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:33:28.504354       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:33:28.504405       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:33:28.504429       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:33:28.507407       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:33:28.508081       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:33:28.508112       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:33:28.512761       1 config.go:199] "Starting service config controller"
	I0917 18:33:28.512861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:33:28.512918       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:33:28.512939       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:33:28.514416       1 config.go:328] "Starting node config controller"
	I0917 18:33:28.514453       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:33:28.613106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:33:28.613234       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:33:28.614642       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68] <==
	W0917 18:33:19.846693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.846747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.851709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:33:19.851760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.856865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.856913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.899195       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:33:19.899331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:33:19.919278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 18:33:19.919346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.946143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:33:19.946213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.964981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.965040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.981516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 18:33:19.981568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.003895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 18:33:20.003950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.126928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:33:20.127004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.156232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:20.156290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.394451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:33:20.394556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 18:33:22.603215       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:41:27 embed-certs-081863 kubelet[2876]: E0917 18:41:27.982762    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:41:32 embed-certs-081863 kubelet[2876]: E0917 18:41:32.142375    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598492141890806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:32 embed-certs-081863 kubelet[2876]: E0917 18:41:32.142730    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598492141890806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:42 embed-certs-081863 kubelet[2876]: E0917 18:41:42.144932    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598502144317953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:42 embed-certs-081863 kubelet[2876]: E0917 18:41:42.145207    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598502144317953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:42 embed-certs-081863 kubelet[2876]: E0917 18:41:42.981296    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:41:52 embed-certs-081863 kubelet[2876]: E0917 18:41:52.147505    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598512146883720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:52 embed-certs-081863 kubelet[2876]: E0917 18:41:52.147574    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598512146883720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:41:56 embed-certs-081863 kubelet[2876]: E0917 18:41:56.981185    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:42:02 embed-certs-081863 kubelet[2876]: E0917 18:42:02.150327    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598522149732975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:02 embed-certs-081863 kubelet[2876]: E0917 18:42:02.151248    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598522149732975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:08 embed-certs-081863 kubelet[2876]: E0917 18:42:08.982445    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:42:12 embed-certs-081863 kubelet[2876]: E0917 18:42:12.153376    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598532152782567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:12 embed-certs-081863 kubelet[2876]: E0917 18:42:12.153707    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598532152782567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]: E0917 18:42:22.032920    2876 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]: E0917 18:42:22.155599    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598542154931302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]: E0917 18:42:22.155632    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598542154931302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:22 embed-certs-081863 kubelet[2876]: E0917 18:42:22.981416    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:42:32 embed-certs-081863 kubelet[2876]: E0917 18:42:32.158277    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598552157652977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:32 embed-certs-081863 kubelet[2876]: E0917 18:42:32.158856    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598552157652977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:42:33 embed-certs-081863 kubelet[2876]: E0917 18:42:33.981372    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	
	
	==> storage-provisioner [1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3] <==
	I0917 18:33:28.304878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:33:28.340297       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:33:28.342898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:33:28.360592       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:33:28.361238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313!
	I0917 18:33:28.367404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8436ff3f-2690-449c-9ae4-d4990227f65a", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313 became leader
	I0917 18:33:28.462631       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-081863 -n embed-certs-081863
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-081863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-98t8z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z: exit status 1 (64.67168ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-98t8z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.48s)
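Note on the post-mortem above: the describe at helpers_test.go:277 returns NotFound even though metrics-server-6867b74b74-98t8z was listed as non-running, most likely because that kubectl invocation is issued without a namespace while the pod lives in kube-system (see the kubelet entries for pod="kube-system/metrics-server-6867b74b74-98t8z"). A rough manual re-check against this profile would look like the lines below; the explicit namespaces and the --watch flag are illustrative additions, not commands taken from this run, and the dashboard selector is assumed to match the one the old-k8s-version variant of this test polls for in the next section:

	kubectl --context embed-certs-081863 -n kube-system describe pod metrics-server-6867b74b74-98t8z
	kubectl --context embed-certs-081863 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch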

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:11.961274   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:19.270518   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:21.889817   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:24.983393   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:44.544889   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:36:53.609112   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 8 times in a row; repeats elided]
E0917 18:37:02.207084   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 43 times in a row; repeats elided]
E0917 18:37:44.954027   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 23 times in a row; repeats elided]
E0917 18:38:07.608529   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 3 times in a row; repeats elided]
E0917 18:38:10.821056   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 14 times in a row; repeats elided]
E0917 18:38:25.271538   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 25 times in a row; repeats elided]
E0917 18:38:50.532300   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 9 times in a row; repeats elided]
E0917 18:38:58.727911   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[identical warning logged 35 times in a row; repeats elided]
E0917 18:39:33.884758   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 14 more times]
E0917 18:39:48.896388   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 6 more times]
E0917 18:39:56.205834   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 25 more times]
E0917 18:40:21.792341   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 59 more times]
E0917 18:41:21.889138   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 2 more times]
E0917 18:41:24.983236   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 18 more times]
E0917 18:41:44.545067   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
[previous warning repeated 30 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:43:10.821076   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
(previous warning repeated 38 more times)
E0917 18:43:50.532066   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
(previous warning repeated 8 more times)
E0917 18:43:58.728141   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/enable-default-cni-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
(previous warning repeated 28 more times)
E0917 18:44:28.055013   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
(previous warning repeated 20 more times)
E0917 18:44:48.896651   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
E0917 18:44:56.205159   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (233.106448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-190698" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
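The connection-refused warnings above are the expected shape of this failure: the wait helper keeps listing pods by label selector, logs each failed list call, and only gives up when the 9m0s context deadline expires. Below is a minimal, hypothetical sketch of that polling pattern using client-go; the function name waitForPods, its parameters, and the 3-second poll interval are illustrative assumptions, not the actual minikube helper.

// Hypothetical sketch (not the actual helpers_test.go code) of waiting for
// pods matching a label selector until one is Running or the timeout expires.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, kubeconfig, ns, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// While the apiserver is down, the list call fails with
				// "connect: connection refused"; the loop warns and retries
				// instead of failing the test immediately.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

Because list errors are swallowed and retried, a stopped apiserver only surfaces as the final "context deadline exceeded" after the full timeout has elapsed.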
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (230.295363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
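For the post-mortem, the test reads single fields out of `minikube status` with a Go template ({{.APIServer}}, {{.Host}}) and tolerates a non-zero exit, since `minikube status` can exit non-zero simply because a component is stopped; hence the "(may be ok)" note above. A hypothetical sketch of that probe follows; the helper name minikubeStatusField and its flag layout are assumptions based on the commands shown in this log.

// Hypothetical post-mortem probe: read one status field from minikube.
package sketch

import (
	"os/exec"
	"strings"
)

// minikubeStatusField runs `minikube status --format={{.<field>}}` for a
// profile/node and returns the printed value. A non-zero exit (such as the
// "exit status 2" seen above) is passed back to the caller but is not by
// itself treated as a failure, since minikube reports stopped components
// through its exit code.
func minikubeStatusField(binary, profile, node, field string) (string, error) {
	cmd := exec.Command(binary, "status", "--format={{."+field+"}}", "-p", profile, "-n", node)
	out, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(out)), err
}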
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25: (1.786589199s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
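The six "openssl x509 ... -checkend 86400" runs above confirm that each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. A minimal Go sketch of an equivalent check (illustrative only, not minikube's implementation; the path below is one of the certs probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid for at
	// least another d, mirroring "openssl x509 -noout -in <path> -checkend 86400".
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("valid for another 24h:", ok, err)
	}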
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
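The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing "host.minikube.internal" entry, appends the current gateway IP, and copies the result back with sudo. A rough Go equivalent of that rewrite (a sketch only, operating on a scratch file rather than the SSH-plus-sudo path the log actually uses):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
	// the given hostname, mirroring the grep -v / echo pipeline in the log above.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale entry for this hostname
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Demo against a scratch copy; the real flow edits /etc/hosts on the guest.
		_ = os.WriteFile("hosts.demo", []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := upsertHost("hosts.demo", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}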
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
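The old-k8s-version-190698 VM has no DHCP lease yet, so libmachine keeps polling for its IP and backs off between attempts (221ms, 339ms, ... up to roughly a second or two, as the retry.go lines show). A minimal retry helper in the same spirit (illustrative sketch only; getIP is a hypothetical stand-in for the DHCP-lease lookup, and the backoff curve is an assumption):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// getIP is a hypothetical stand-in for querying the libvirt network's DHCP
	// leases; it fails until the guest has requested an address.
	func getIP() (string, error) { return "", errNoLease }

	// waitForIP retries getIP with a growing, jittered delay, similar in spirit
	// to the "will retry after ..." messages in the log above.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			if delay < 2*time.Second {
				delay += delay / 2 // back off gradually
			}
		}
		return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
	}

	func main() {
		ip, err := waitForIP(3 * time.Second)
		fmt.Println(ip, err)
	}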
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
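The sequence above is the apiserver readiness wait: /healthz first refuses the connection, then returns 403 for the anonymous probe, then 500 while post-start hooks (RBAC bootstrap roles, apiservice registration, and so on) finish, and finally 200 "ok" after about five seconds. A bare-bones poller along the same lines (a sketch, not minikube's code; it skips TLS verification because this probe presents no client certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it answers 200 "ok"
	// or the timeout expires, tolerating the 403/500 phases seen in the log.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.72.182:8443/healthz", 2*time.Minute))
	}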
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
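
The four grep-then-rm pairs above are minikube's stale-kubeconfig check: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8444 is removed so kubeadm can regenerate it. A compact shell equivalent (editor's sketch; minikube actually issues each grep and rm as a separate ssh_runner call):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
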
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
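
The ssh_runner calls at 18:27:47.484, 18:27:47.620 and 18:27:48.837 through 18:27:49.184 re-run individual kubeadm init phases against the existing node instead of a full kubeadm init. A sketch of the same sequence run by hand on the guest (K is just shorthand for the pinned binary directory):

	K=/var/lib/minikube/binaries/v1.31.1
	sudo env PATH="$K:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
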
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
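
The 403 -> 500 -> 200 progression above is the apiserver coming back up: the anonymous probe is rejected until the rbac/bootstrap-roles post-start hook (the check still failing in the 500 bodies) installs the bindings that allow unauthenticated access to /healthz. A hand-run equivalent of the probe, as a sketch only (the -k flag is an assumption here, to tolerate the cluster's self-signed serving certificate):

	curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.39.58:8444/healthz
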
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
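
The lines above compute each CA certificate's OpenSSL subject hash and symlink the certificate into /etc/ssl/certs/<hash>.0 so the guest's system trust store picks it up. A minimal Go sketch of the same idea follows; it shells out to the same openssl command seen in the log and uses the minikubeCA.pem path from the log as an example. This is illustrative only, not minikube's implementation, and it needs root to write under /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the commands in the log: ask openssl for the
// certificate's subject hash, then symlink the cert into /etc/ssl/certs
// as <hash>.0 so tools that scan the system trust store can find it.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Path taken from the log above; purely illustrative.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
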
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
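
Here the log runs `openssl x509 -checkend 86400` against each control-plane certificate, i.e. it verifies that none of them expires within the next 24 hours before attempting a cluster restart. Below is a minimal Go sketch of that check; the certificate paths are copied from the log lines above, and the sketch is not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// certPaths lists the control-plane certificates checked in the log above.
var certPaths = []string{
	"/var/lib/minikube/certs/apiserver-etcd-client.crt",
	"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
	"/var/lib/minikube/certs/etcd/server.crt",
	"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
	"/var/lib/minikube/certs/etcd/peer.crt",
	"/var/lib/minikube/certs/front-proxy-client.crt",
}

func main() {
	for _, p := range certPaths {
		// -checkend 86400 makes openssl exit non-zero if the cert
		// expires within the next 86400 seconds (24 hours).
		cmd := exec.Command("openssl", "x509", "-noout", "-in", p, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			fmt.Printf("certificate %s expires within 24h or could not be read: %v\n", p, err)
			continue
		}
		fmt.Printf("certificate %s is valid for at least 24h\n", p)
	}
}
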
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
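
The 403 → 500 → 200 progression above is the apiserver healthz wait: the server first rejects anonymous requests, then reports failing poststart hooks, and finally returns "ok". The following Go sketch polls a /healthz URL until it returns 200, assuming the endpoint and timeout shown; unlike minikube's real client it skips TLS verification purely for brevity, so it is illustrative only.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403/500 responses (as in the log) mean the server
// is up but not yet fully initialised, so polling simply continues.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative shortcut: skip certificate verification instead of
		// loading the cluster CA the way minikube does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; timeout chosen for illustration.
	if err := waitForHealthz("https://192.168.50.61:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
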
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
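	(Editorial sketch, not part of the captured log: the cycle above is minikube's diagnostic pass while waiting for the apiserver — for each expected control-plane component it lists CRI containers, then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. The shell lines below restate that pass using only the commands already shown in the log; they assume a shell with sudo on the minikube node, and the kubectl/kubeconfig paths are copied from the log and may differ on other installs.)
	#!/bin/bash
	# List all CRI containers for each component; an empty result corresponds to the
	# 'No container was found matching ...' warnings in the log above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# Log collection, as run by logs.go in the entries above:
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails with 'connection refused' while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a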
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
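(Editorial note: the interleaved pod_ready.go:103 lines come from three other test processes (77264, 77433, 77819) that are polling their metrics-server pods roughly every two seconds until the pod's Ready condition turns True; throughout this section it never does. Below is a hedged client-go sketch of that kind of wait; the namespace and pod name are copied from the log, the kubeconfig path is a placeholder, and this is an illustration rather than the helper minikube actually uses.)

    // wait_ready.go: illustrative sketch of a Ready-condition poll like pod_ready.go.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // os.Args[1] is a path to a kubeconfig for a reachable cluster (assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, name := "kube-system", "metrics-server-6867b74b74-g2ttm" // names taken from the log
        for {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        fmt.Printf("pod %q has status \"Ready\":%q\n", name, cond.Status)
                        if cond.Status == corev1.ConditionTrue {
                            return
                        }
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log shows roughly 2s between checks
        }
    }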
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
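Note on the cycle above: every crictl query returns an empty id list and "kubectl describe nodes" fails with "connection refused" on localhost:8443, so no control-plane container is running on this node at all; the harness simply re-checks for a kube-apiserver process (the pgrep line) and re-gathers kubelet, dmesg, CRI-O and container-status logs roughly every three seconds, as the timestamps show. A minimal sketch of that polling pattern, assuming a local pgrep stands in for minikube's ssh_runner (which runs the same command inside the VM), not minikube's actual code:

	// poll.go: wait for a kube-apiserver process the way the log above does.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
	// pgrep exits 0 when at least one process matches, non-zero otherwise.
	func apiserverRunning() bool {
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // the wait budget visible elsewhere in the log
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			fmt.Println("kube-apiserver not found; gathering logs and retrying")
			time.Sleep(3 * time.Second) // roughly the cadence of the cycles above
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}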
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
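Once the 4m0s pod-readiness budget is exhausted, restartPrimaryControlPlane gives up and the harness falls back to wiping the old control plane before re-initialising it, which is what the kubeadm reset invocation above starts. A rough, illustrative sketch of that fallback, assuming the binary path and CRI socket shown in the log (not minikube's implementation):

	// reset.go: fall back to `kubeadm reset` after the restart path times out.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a shell command and echoes its combined output, loosely
	// imitating what ssh_runner does inside the VM.
	func run(cmd string) error {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func main() {
		const binDir = "/var/lib/minikube/binaries/v1.31.1" // version seen in the log line above
		reset := `sudo env PATH="` + binDir + `:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`
		if err := run(reset); err != nil {
			fmt.Println("kubeadm reset failed:", err)
			return
		}
		// After a successful reset, the harness proceeds to stale-config cleanup
		// and a fresh `kubeadm init`, as the later log lines show.
	}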
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
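The cleanup pass above treats each kubeconfig under /etc/kubernetes as stale unless it references https://control-plane.minikube.internal:8443 and removes it so kubeadm init can regenerate it; because the reset already deleted the files, every grep exits with status 2 and every rm -f is a no-op. A self-contained sketch of that sweep, using plain os/exec in place of the SSH runner:

	// sweep.go: remove kubeconfigs that do not reference the expected endpoint.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero both when the pattern is absent and when the
			// file is missing; either way the config is treated as stale.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%s does not reference %s; removing\n", f, endpoint)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}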
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
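The init command is launched with a long --ignore-preflight-errors list: the DirAvailable/FileAvailable checks would otherwise fail because directories and manifests from the previous installation still exist, and Port-10250, Swap, NumCPU and Mem are presumably relaxed to suit the test VM. A hypothetical snippet assembling the same flag string (for illustration only):

	// initcmd.go: build the kubeadm init invocation seen in the log line above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"DirAvailable--var-lib-minikube",
			"DirAvailable--var-lib-minikube-etcd",
			"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
			"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
			"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
			"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
			"Port-10250", "Swap", "NumCPU", "Mem",
		}
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
			strings.Join(ignored, ","),
		)
		fmt.Println(cmd)
	}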
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
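[editor's note] The two lines above run `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH and record the result (-16). A minimal standalone Go sketch of the same check, assuming a kube-apiserver process is running on the local host (this is illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Locate the kube-apiserver PID, like the `pgrep kube-apiserver` call in the log.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))

        // Read the OOM adjustment score; the run above reports -16.
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }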
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
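[editor's note] The repeated `kubectl get sa default` runs above are a poll: the command is retried roughly every 500ms until the default service account exists, and the total wait is then logged (4.94s here). A minimal sketch of that loop, assuming `kubectl` is on PATH and using the kubeconfig path shown in the log (not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        start := time.Now()
        for {
            // Same probe as in the log: does `kubectl get sa default` succeed yet?
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                fmt.Printf("default service account ready after %s\n", time.Since(start))
                return nil
            }
            if time.Since(start) > timeout {
                return fmt.Errorf("timed out waiting for default service account")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            panic(err)
        }
    }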
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
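[editor's note] The healthz check above is an HTTPS GET against the apiserver endpoint, treating an HTTP 200 with body "ok" as healthy. A standalone Go sketch of the same probe; TLS verification is skipped here purely to keep the example short (the endpoint URL is the one from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // Same endpoint the log probes: https://192.168.72.182:8443/healthz
        resp, err := client.Get("https://192.168.72.182:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }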
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
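[editor's note] The pod_ready lines throughout this run wait for each system pod to report the Ready condition. A hedged client-go sketch of an equivalent wait (names, pod, and kubeconfig path are taken from the log for illustration; this is not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19662-11085/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-gddwk", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }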
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
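[editor's note] After the metrics-server manifests are applied above, one way to confirm the addon actually came up is to check whether its APIService (v1beta1.metrics.k8s.io) reports Available. This is a hypothetical verification for illustration, not the check minikube runs:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Query the Available condition of the metrics-server APIService via kubectl.
        out, err := exec.Command("kubectl",
            "--kubeconfig", "/var/lib/minikube/kubeconfig",
            "get", "apiservice", "v1beta1.metrics.k8s.io",
            "-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("metrics.k8s.io Available:", strings.TrimSpace(string(out)))
    }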
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
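[editor's note] The [kubelet-check] lines above poll http://localhost:10248/healthz (the curl shown in the log) and see "connection refused" until the kubelet comes up. A standalone Go sketch of that single probe:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // Same endpoint kubeadm's kubelet-check hits.
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // e.g. "connect: connection refused" while the kubelet is not yet running
            fmt.Println("kubelet not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz status:", resp.Status)
    }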
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
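[editor's note] The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the following `kubeadm init` can regenerate them. A hedged Go sketch of that cleanup (paths and endpoint mirror the log; error handling is minimal and this is not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Equivalent of the `sudo rm -f` calls in the log.
                fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
                os.Remove(f)
            }
        }
    }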
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
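
The pod_ready waits above poll each control-plane pod until its Ready condition reports True. A minimal client-go sketch of that polling pattern follows; it is illustrative only (hypothetical waitPodReady helper, local kubeconfig), not minikube's actual pod_ready.go code, which runs inside the test harness with its own client setup.

	// waitPodReady polls a pod until its Ready condition is True.
	// Sketch of the readiness-wait pattern seen above; names are illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-081863", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
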
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
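
The api_server health wait above issues an HTTPS GET against https://192.168.50.61:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal sketch of that probe is below; TLS verification is skipped purely for brevity, which the real check does not do (it uses the cluster CA), so treat it as an assumption of this sketch.

	// Probe the apiserver /healthz endpoint until it returns HTTP 200 "ok".
	// Sketch only: InsecureSkipVerify is not how a production client should behave.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}

	func main() {
		url := "https://192.168.50.61:8443/healthz"
		for i := 0; i < 30; i++ {
			if apiserverHealthy(url) {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver")
	}
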
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
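
The cleanup above greps each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8443 and removes the file when the grep fails (missing file or stale endpoint), so the retried kubeadm init regenerates it. A rough Go sketch of that logic follows; it is an assumption-laden illustration run locally, whereas minikube actually executes these grep/rm commands over SSH inside the guest VM.

	// Remove kubeconfig files that do not reference the expected control-plane
	// endpoint, mirroring the stale-config cleanup shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or stale file: delete it (ignore errors, like rm -f)
				// so the next kubeadm init writes a fresh one.
				if os.Remove(path) == nil {
					fmt.Println("removed", path)
				}
			}
		}
	}
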
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
	
	
	==> CRI-O <==
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.461290636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598712461268650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71317740-b14b-467b-8127-452fb9142184 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.462095920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1149dbab-d3a8-4bb9-80fc-7ddebddfac74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.462160008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1149dbab-d3a8-4bb9-80fc-7ddebddfac74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.462243558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1149dbab-d3a8-4bb9-80fc-7ddebddfac74 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.501548214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4ce6873-13fe-48e7-bc45-a199c69ecb17 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.501734277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4ce6873-13fe-48e7-bc45-a199c69ecb17 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.503217334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bcbf694-749e-4356-893a-1bb566d5974d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.503747249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598712503704214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bcbf694-749e-4356-893a-1bb566d5974d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.504556147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c898b5bb-27ce-4582-a76c-e553ce3a417c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.504714265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c898b5bb-27ce-4582-a76c-e553ce3a417c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.504766637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c898b5bb-27ce-4582-a76c-e553ce3a417c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.542748958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac35fe8e-ad06-4db1-82c6-4f28088c0519 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.542836379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac35fe8e-ad06-4db1-82c6-4f28088c0519 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.544132238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97310e32-36ba-4d9a-b9cf-86bf3466c1ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.544647221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598712544546585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97310e32-36ba-4d9a-b9cf-86bf3466c1ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.545632507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b4f89ef-2f4b-45d9-8dfc-e87dddcce6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.545714907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b4f89ef-2f4b-45d9-8dfc-e87dddcce6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.545758836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2b4f89ef-2f4b-45d9-8dfc-e87dddcce6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.580367770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4ae1b9c-5b93-43bb-abbb-6b1c788103db name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.580452316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4ae1b9c-5b93-43bb-abbb-6b1c788103db name=/runtime.v1.RuntimeService/Version
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.582056176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58dee337-c2c5-4f75-8f75-85e95598aa32 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.582537602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598712582511556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58dee337-c2c5-4f75-8f75-85e95598aa32 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.583144872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71dd6bd7-5797-4a87-a840-14be8578a840 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.583200062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71dd6bd7-5797-4a87-a840-14be8578a840 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:45:12 old-k8s-version-190698 crio[631]: time="2024-09-17 18:45:12.583242622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71dd6bd7-5797-4a87-a840-14be8578a840 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep17 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054866] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046421] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.149899] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.871080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681331] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep17 18:28] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072788] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.186947] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145789] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.292905] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.819811] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.084662] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874135] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +13.062004] kauditd_printk_skb: 46 callbacks suppressed
	[Sep17 18:32] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Sep17 18:34] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.068770] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:45:12 up 17 min,  0 users,  load average: 0.16, 0.10, 0.08
	Linux old-k8s-version-190698 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: goroutine 163 [chan receive]:
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000cf4230, 0xc000b67380)
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b68fa0, 0xc000cf0fc0)
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: goroutine 164 [chan receive]:
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000d305a0)
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 17 18:45:07 old-k8s-version-190698 kubelet[6460]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 17 18:45:07 old-k8s-version-190698 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 17 18:45:07 old-k8s-version-190698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 17 18:45:08 old-k8s-version-190698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 17 18:45:08 old-k8s-version-190698 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 17 18:45:08 old-k8s-version-190698 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 17 18:45:08 old-k8s-version-190698 kubelet[6470]: I0917 18:45:08.698902    6470 server.go:416] Version: v1.20.0
	Sep 17 18:45:08 old-k8s-version-190698 kubelet[6470]: I0917 18:45:08.699263    6470 server.go:837] Client rotation is on, will bootstrap in background
	Sep 17 18:45:08 old-k8s-version-190698 kubelet[6470]: I0917 18:45:08.701447    6470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 17 18:45:08 old-k8s-version-190698 kubelet[6470]: W0917 18:45:08.702469    6470 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 17 18:45:08 old-k8s-version-190698 kubelet[6470]: I0917 18:45:08.702816    6470 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (260.848541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-190698" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.77s)
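The kubeadm output above never sees a healthy kubelet on the old-k8s-version (v1.20.0) profile, and minikube's own suggestion is to check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal manual-triage sketch, assuming shell access to the old-k8s-version-190698 VM and reusing only commands already quoted in this log (the final retry is a hypothetical re-run outside the test harness, not part of this report):

    # Kubelet health on the failing node (same checks the kubeadm hint recommends)
    out/minikube-linux-amd64 ssh -p old-k8s-version-190698 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-190698 "sudo journalctl -xeu kubelet | tail -n 100"

    # Any control-plane containers CRI-O actually started
    out/minikube-linux-amd64 ssh -p old-k8s-version-190698 \
      "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # If the kubelet log points at a cgroup-driver mismatch, retry the start with
    # the flag minikube suggests above (hypothetical retry, assumed profile still exists)
    out/minikube-linux-amd64 start -p old-k8s-version-190698 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd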

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (338.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328741 -n no-preload-328741
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:47:30.750774063 +0000 UTC m=+6719.825346336
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-328741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-328741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.967µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-328741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
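The dashboard wait above exhausts its 9m0s budget before any k8s-app=kubernetes-dashboard pod appears, and the follow-up describe call fails only because the context deadline is already exceeded (1.967µs). A short manual check, assuming the no-preload-328741 cluster is still reachable and using the same selector, namespace, and deployment name the test itself uses:

    # Did any dashboard pods matching the test's selector ever get scheduled?
    kubectl --context no-preload-328741 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide

    # Inspect the scraper deployment the test describes; the test expects its
    # image to contain registry.k8s.io/echoserver:1.4
    kubectl --context no-preload-328741 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper
    kubectl --context no-preload-328741 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'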
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328741 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-328741 logs -n 25: (1.371994309s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:47 UTC |
	| start   | -p newest-cni-089562 --memory=2200 --alsologtostderr   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:47:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:47:29.140757   84259 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:47:29.141015   84259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:47:29.141025   84259 out.go:358] Setting ErrFile to fd 2...
	I0917 18:47:29.141029   84259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:47:29.141246   84259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:47:29.141847   84259 out.go:352] Setting JSON to false
	I0917 18:47:29.142798   84259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8964,"bootTime":1726589885,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:47:29.142891   84259 start.go:139] virtualization: kvm guest
	I0917 18:47:29.145168   84259 out.go:177] * [newest-cni-089562] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:47:29.146914   84259 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:47:29.146926   84259 notify.go:220] Checking for updates...
	I0917 18:47:29.150005   84259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:47:29.151517   84259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:47:29.152925   84259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:47:29.154332   84259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:47:29.155503   84259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:47:29.157172   84259 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:47:29.157293   84259 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:47:29.157384   84259 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:47:29.157468   84259 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:47:29.196108   84259 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:47:29.197557   84259 start.go:297] selected driver: kvm2
	I0917 18:47:29.197571   84259 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:47:29.197582   84259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:47:29.198630   84259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:47:29.198739   84259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:47:29.215099   84259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:47:29.215147   84259 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0917 18:47:29.215211   84259 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0917 18:47:29.215431   84259 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 18:47:29.215461   84259 cni.go:84] Creating CNI manager for ""
	I0917 18:47:29.215509   84259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:47:29.215516   84259 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 18:47:29.215566   84259 start.go:340] cluster config:
	{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:47:29.215664   84259 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:47:29.218087   84259 out.go:177] * Starting "newest-cni-089562" primary control-plane node in "newest-cni-089562" cluster
	I0917 18:47:29.219357   84259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:47:29.219411   84259 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 18:47:29.219436   84259 cache.go:56] Caching tarball of preloaded images
	I0917 18:47:29.219521   84259 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:47:29.219532   84259 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 18:47:29.219631   84259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json ...
	I0917 18:47:29.219647   84259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json: {Name:mk959b4b8e8a969bfd3d07777a12471ed75af3af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:47:29.219823   84259 start.go:360] acquireMachinesLock for newest-cni-089562: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:47:29.219858   84259 start.go:364] duration metric: took 20.221µs to acquireMachinesLock for "newest-cni-089562"
	I0917 18:47:29.219876   84259 start.go:93] Provisioning new machine with config: &{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:47:29.219939   84259 start.go:125] createHost starting for "" (driver="kvm2")
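	The lines above pin down where the reused preload tarball and the freshly written profile config live on the agent; to inspect them after the run, something along these lines works (a sketch, with paths copied verbatim from the preload.go:146 and profile.go:143 entries above):
	
	  # Confirm the cached preload tarball that the start skipped re-downloading
	  ls -lh /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	  # Pretty-print the cluster config the profile writer just saved
	  python3 -m json.tool /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json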
	
	
	==> CRI-O <==
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.359951094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598851359928115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29ef16f9-021a-465b-aad1-09c1db7ed96c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.361195021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c3073ac-d5cc-4d11-adae-67050ea05f03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.361252534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c3073ac-d5cc-4d11-adae-67050ea05f03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.361471477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c3073ac-d5cc-4d11-adae-67050ea05f03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.404249424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=279fb6ac-a5a9-4f25-912c-38b275ebf8fe name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.404331283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=279fb6ac-a5a9-4f25-912c-38b275ebf8fe name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.405767629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71849262-9b71-48bf-b2c5-605886a51e6a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.406212040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598851406185970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71849262-9b71-48bf-b2c5-605886a51e6a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.407057163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a983c94a-b81d-4eaf-850c-d859f945ab4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.407203507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a983c94a-b81d-4eaf-850c-d859f945ab4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.407481338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a983c94a-b81d-4eaf-850c-d859f945ab4c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.446975760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53f342e0-9b4a-4cea-b821-d763eb9e4074 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.447063171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53f342e0-9b4a-4cea-b821-d763eb9e4074 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.448752876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74bf3d8d-0e6d-4477-9184-97acbed02995 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.449289908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598851449258299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74bf3d8d-0e6d-4477-9184-97acbed02995 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.449954139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b1789be-eab8-4d5e-8439-bc3ecb012107 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.450053417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b1789be-eab8-4d5e-8439-bc3ecb012107 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.450381364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b1789be-eab8-4d5e-8439-bc3ecb012107 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.486822344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d323c0d-9a27-4e99-b6d7-8114e8b68270 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.486919747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d323c0d-9a27-4e99-b6d7-8114e8b68270 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.488562001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea36435d-0c5b-4ece-b1d9-b863506a50e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.488956336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598851488931284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea36435d-0c5b-4ece-b1d9-b863506a50e2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.489642945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac77cfe0-9270-483b-a2e4-ebbbb7dc010b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.489717517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac77cfe0-9270-483b-a2e4-ebbbb7dc010b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:31 no-preload-328741 crio[703]: time="2024-09-17 18:47:31.489920985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6,PodSandboxId:288779327f47e6ca273a51536feafb6635a867a22806556906b66eed182a3e10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597960312259345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03a8e7f5-ea70-4653-837b-5ad54de48136,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292,PodSandboxId:6b63e0f9c879ea63c0b44fec1d9189cc25b1c291d957937b00a49d550a5996bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959282174778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gddwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57f85dd3-be48-4648-8d70-7a06aeaecdc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985,PodSandboxId:dce3e4d35ce3334eac74ee57b6df8c41b455f9f83b5902c71565ae834feab740,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597959359057212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qv4pq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31
f7e4b5-3870-41a1-96f8-8e13511fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64,PodSandboxId:ada4f57161adf25f5621b4b2d120ad81751a027ec3875894b33acfdd00807480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726597958943487687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2945m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7b75b4-28c5-476a-b002-05313976c138,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959,PodSandboxId:c0bc1617cf8d0b7b093f166452216af03951825a2a2245534f60f18dd359ffd5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597947484439622,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7f5bfd03d56b992d0996fc63641b99,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6,PodSandboxId:530a84d7c7b43240fd5b68b35730b00c7c74171098c9c63e194b585ec6666c33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597947543461772,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eabf95ac53e177e5c6b586c85b9274,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131,PodSandboxId:fd02b06c4661d51a0c1c1e3ab21cc2363ee892f13f478155001e44b53d83848a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597947501637454,Labels:map[string]string{io.kubernetes
.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136,PodSandboxId:53ac81c8f4a083199505dfcac4dc0075877c7719c345daea72847f9410c08792,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597947465569478,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88e3c9e3660a4a2fc689695534e4c55,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9,PodSandboxId:2c2bd6cac468611364bef6a0d230ff1b8952207263562842678794b3b5857856,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597661747011364,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-328741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34afefdc5cd5dfffb05860cfe10789d3,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac77cfe0-9270-483b-a2e4-ebbbb7dc010b name=/runtime.v1.RuntimeService/ListContainers
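	The debug entries above are the kubelet's periodic CRI polling (Version, ImageFsInfo, ListContainers) over crio's socket; the same queries can be issued by hand from inside the node, for example after `minikube ssh -p no-preload-328741` (a sketch, assuming crictl is available on the node as it normally is in the minikube guest image):
	
	  # Mirror the kubelet's CRI calls against the socket named in the node annotations
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a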
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e25a907995d38       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   288779327f47e       storage-provisioner
	0c2c1757b5010       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   dce3e4d35ce33       coredns-7c65d6cfc9-qv4pq
	f85a4e2d08da4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   6b63e0f9c879e       coredns-7c65d6cfc9-gddwk
	808a6dceb7063       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   ada4f57161adf       kube-proxy-2945m
	b91b94fc0010b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   530a84d7c7b43       kube-controller-manager-no-preload-328741
	e54abd262e269       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   fd02b06c4661d       kube-apiserver-no-preload-328741
	49e528cc460f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   c0bc1617cf8d0       etcd-no-preload-328741
	976b4c709134c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   53ac81c8f4a08       kube-scheduler-no-preload-328741
	4e46c0fa82cfc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   2c2bd6cac4686       kube-apiserver-no-preload-328741
	
	
	==> coredns [0c2c1757b5010ce5dd4c18226be5ba9dbac3aef31541d44404ba77109a340985] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f85a4e2d08da43dbe670d19d96aa06781add66c8376d2f9e232c67764cd73292] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-328741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-328741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=no-preload-328741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:32:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-328741
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:47:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:42:56 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:42:56 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:42:56 +0000   Tue, 17 Sep 2024 18:32:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:42:56 +0000   Tue, 17 Sep 2024 18:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    no-preload-328741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 159bd2b5f8b94daca6c02b7ffef2b2e6
	  System UUID:                159bd2b5-f8b9-4dac-a6c0-2b7ffef2b2e6
	  Boot ID:                    e330fa09-6d35-43d5-8b23-1c8e7bf952a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-gddwk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-qv4pq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-328741                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-328741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-328741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2945m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-328741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-cvttg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-328741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-328741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-328741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-328741 event: Registered Node no-preload-328741 in Controller
	
	
	==> dmesg <==
	[  +0.052080] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041143] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862010] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.638613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.929271] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.057298] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067527] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.195977] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.125163] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.298048] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[ +16.021909] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.059667] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.021214] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.577933] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.122932] kauditd_printk_skb: 82 callbacks suppressed
	[Sep17 18:32] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +0.071057] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.503827] systemd-fstab-generator[3288]: Ignoring "noauto" option for root device
	[  +0.096771] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.363111] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	[  +0.107648] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.775165] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [49e528cc460f1008cbb0e3528a6ff5ef106aeddd775cf1a915b8ae6d511d1959] <==
	{"level":"info","ts":"2024-09-17T18:32:27.942439Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.182:2380"}
	{"level":"info","ts":"2024-09-17T18:32:27.942407Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T18:32:27.945362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:27.945771Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ff4c26660998c2c8","initial-advertise-peer-urls":["https://192.168.72.182:2380"],"listen-peer-urls":["https://192.168.72.182:2380"],"advertise-client-urls":["https://192.168.72.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T18:32:27.948597Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.182:2380"}
	{"level":"info","ts":"2024-09-17T18:32:27.948979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T18:32:27.955389Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.963640Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ff4c26660998c2c8","local-member-attributes":"{Name:no-preload-328741 ClientURLs:[https://192.168.72.182:2379]}","request-path":"/0/members/ff4c26660998c2c8/attributes","cluster-id":"1c15affd5c0f3dba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:32:27.963742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:27.964723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:27.965691Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:27.979280Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.182:2379"}
	{"level":"info","ts":"2024-09-17T18:32:27.979414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1c15affd5c0f3dba","local-member-id":"ff4c26660998c2c8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.979544Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.979606Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:27.968617Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:27.987458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:32:28.025157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:28.025326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:42:28.440300Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-09-17T18:42:28.449955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":686,"took":"8.73323ms","hash":2509722936,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2113536,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-09-17T18:42:28.450188Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2509722936,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T18:47:28.448253Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2024-09-17T18:47:28.452628Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":929,"took":"3.682329ms","hash":3189857150,"current-db-size-bytes":2113536,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-17T18:47:28.452731Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3189857150,"revision":929,"compact-revision":686}
	
	
	==> kernel <==
	 18:47:31 up 20 min,  0 users,  load average: 0.13, 0.18, 0.16
	Linux no-preload-328741 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4e46c0fa82cfcfd9610da4d10f59632ea78de09d0e9da381b6552d2ae89e1db9] <==
	W0917 18:32:21.610030       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.611490       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.613934       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.615280       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.636446       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.669464       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.672990       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.704066       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.777509       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.779905       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.791885       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.830607       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.927152       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.960453       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:21.990708       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.025508       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.105164       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.189991       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.307464       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.347524       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.411682       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.449574       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.450941       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.483585       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:22.749813       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e54abd262e2695f398247112d6e385326a4419d9c9f6744d7c80478ea5abe131] <==
	I0917 18:43:31.197409       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:43:31.197569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:45:31.198514       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:45:31.198620       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:45:31.200223       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:45:31.201561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:45:31.201649       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:45:31.202860       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:47:30.200603       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:30.201289       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:47:31.203253       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:31.203329       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:47:31.203488       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:31.203572       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:47:31.204510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:47:31.205688       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b91b94fc0010bc21e78fe4163bdc09bbba44c0d8f5a62d66aae52653816ad8b6] <==
	E0917 18:42:07.244187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:42:07.724230       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:42:37.251230       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:42:37.732993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:42:56.137834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-328741"
	E0917 18:43:07.258214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:43:07.743055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:43:37.266240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:43:37.752873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:43:44.108240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="219.024µs"
	I0917 18:43:55.116356       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="104.879µs"
	E0917 18:44:07.274651       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:07.761258       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:44:37.281194       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:37.770527       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:07.288729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:07.779843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:37.296855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:37.787808       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:07.303447       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:07.796872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:37.310749       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:37.805393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:47:07.318236       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:47:07.814717       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [808a6dceb70633f9f5388cad26d46c756126eb16dca9e626a592a5498a96de64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:32:39.515289       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:32:39.542858       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.182"]
	E0917 18:32:39.542969       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:32:39.813964       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:32:39.814053       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:32:39.814148       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:32:39.816811       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:32:39.817289       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:32:39.817338       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:32:39.818822       1 config.go:199] "Starting service config controller"
	I0917 18:32:39.818903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:32:39.818957       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:32:39.818978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:32:39.819676       1 config.go:328] "Starting node config controller"
	I0917 18:32:39.819766       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:32:39.920266       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:32:39.923263       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:32:39.962193       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [976b4c709134ce601c00b084d7057f3af82f641b9700602c3b971204e0fdd136] <==
	W0917 18:32:31.114635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:32:31.114747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.127308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:32:31.127865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.145408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:32:31.146248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.167005       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:32:31.167161       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:32:31.260506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 18:32:31.261018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.315068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:31.315331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.392117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 18:32:31.392268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.413854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:31.414040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.489902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 18:32:31.490478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.518510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:31.519380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.554004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:32:31.554216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:31.554490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 18:32:31.554779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 18:32:33.251420       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:46:16 no-preload-328741 kubelet[3296]: E0917 18:46:16.090854    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:46:23 no-preload-328741 kubelet[3296]: E0917 18:46:23.291744    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598783291311395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:23 no-preload-328741 kubelet[3296]: E0917 18:46:23.292242    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598783291311395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:31 no-preload-328741 kubelet[3296]: E0917 18:46:31.090750    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]: E0917 18:46:33.150913    3296 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]: E0917 18:46:33.294071    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598793293705908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:33 no-preload-328741 kubelet[3296]: E0917 18:46:33.294179    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598793293705908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:43 no-preload-328741 kubelet[3296]: E0917 18:46:43.295773    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598803295431472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:43 no-preload-328741 kubelet[3296]: E0917 18:46:43.295833    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598803295431472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:46 no-preload-328741 kubelet[3296]: E0917 18:46:46.089644    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:46:53 no-preload-328741 kubelet[3296]: E0917 18:46:53.297782    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598813297437481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:46:53 no-preload-328741 kubelet[3296]: E0917 18:46:53.297837    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598813297437481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:01 no-preload-328741 kubelet[3296]: E0917 18:47:01.090327    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:47:03 no-preload-328741 kubelet[3296]: E0917 18:47:03.300178    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598823299655983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:03 no-preload-328741 kubelet[3296]: E0917 18:47:03.300823    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598823299655983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:13 no-preload-328741 kubelet[3296]: E0917 18:47:13.302725    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598833302409736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:13 no-preload-328741 kubelet[3296]: E0917 18:47:13.302779    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598833302409736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:15 no-preload-328741 kubelet[3296]: E0917 18:47:15.091954    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	Sep 17 18:47:23 no-preload-328741 kubelet[3296]: E0917 18:47:23.305053    3296 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598843304616660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:23 no-preload-328741 kubelet[3296]: E0917 18:47:23.305476    3296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598843304616660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:26 no-preload-328741 kubelet[3296]: E0917 18:47:26.090926    3296 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cvttg" podUID="1b2d6700-2e3c-4a35-9794-0ec095eed0d4"
	
	
	==> storage-provisioner [e25a907995d3828299c33ad3073839ba30163d24795684a0b4aeaeba2183d2b6] <==
	I0917 18:32:40.426772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:32:40.444804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:32:40.444898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:32:40.459137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:32:40.460047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"caff1387-3893-41de-a0f4-a5fcc852dbf2", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1 became leader
	I0917 18:32:40.460531       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1!
	I0917 18:32:40.561457       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-328741_f404a796-99f9-45f2-9b4b-86fe000126d1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328741 -n no-preload-328741
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-328741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cvttg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg: exit status 1 (67.121144ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cvttg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-328741 describe pod metrics-server-6867b74b74-cvttg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (338.03s)
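
For manual triage of a failure like this one, the checks the harness runs above can be repeated by hand. The commands below are a sketch and not part of the test suite; the context name and pod name are copied from this report, and note that `kubectl describe pod <name>` without `-n` searches the default namespace, which may be why the describe above returned NotFound even though the pod is listed under kube-system earlier in the log:

	# List pods not in Running phase across all namespaces (same field selector the harness uses above)
	kubectl --context no-preload-328741 get po -A --field-selector=status.phase!=Running
	# Describe the metrics-server pod in its actual namespace (pod name taken from this report;
	# the ReplicaSet may have recreated it under a different suffix since this run)
	kubectl --context no-preload-328741 -n kube-system describe pod metrics-server-6867b74b74-cvttg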

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (447.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:49:28.513023067 +0000 UTC m=+6837.587595337
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-438836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.35µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-438836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
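
For reference, the 9m0s readiness wait that timed out above corresponds roughly to the manual check below; this is a sketch rather than the harness's exact logic, with the context name, namespace, and label selector taken from the log lines above:

	# Wait up to 9 minutes for dashboard pods to become Ready (approximates the harness's wait loop)
	kubectl --context default-k8s-diff-port-438836 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# If the wait times out, inspect the dashboard workloads directly
	kubectl --context default-k8s-diff-port-438836 -n kubernetes-dashboard get deploy,po -o wide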
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-438836 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-438836 logs -n 25: (1.324718391s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:47 UTC |
	| start   | -p newest-cni-089562 --memory=2200 --alsologtostderr   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:48 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:47 UTC |
	| addons  | enable metrics-server -p newest-cni-089562             | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-089562                  | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-089562 --memory=2200 --alsologtostderr   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	| image   | newest-cni-089562 image list                           | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:49 UTC | 17 Sep 24 18:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:49 UTC | 17 Sep 24 18:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:49 UTC | 17 Sep 24 18:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:49 UTC | 17 Sep 24 18:49 UTC |
	| delete  | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:49 UTC | 17 Sep 24 18:49 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:48:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:48:29.991251   84998 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:48:29.991383   84998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:48:29.991394   84998 out.go:358] Setting ErrFile to fd 2...
	I0917 18:48:29.991399   84998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:48:29.991634   84998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:48:29.992448   84998 out.go:352] Setting JSON to false
	I0917 18:48:29.993539   84998 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9025,"bootTime":1726589885,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:48:29.993653   84998 start.go:139] virtualization: kvm guest
	I0917 18:48:29.995997   84998 out.go:177] * [newest-cni-089562] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:48:29.997691   84998 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:48:29.997701   84998 notify.go:220] Checking for updates...
	I0917 18:48:30.000364   84998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:48:30.002173   84998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:48:30.003677   84998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:48:30.004973   84998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:48:30.006482   84998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:48:30.008149   84998 config.go:182] Loaded profile config "newest-cni-089562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:48:30.008615   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.008694   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.025310   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0917 18:48:30.025881   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.026448   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.026468   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.026835   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.027021   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.027369   84998 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:48:30.027691   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.027725   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.044172   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0917 18:48:30.044583   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.045051   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.045073   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.045422   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.045819   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.082856   84998 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:48:30.084207   84998 start.go:297] selected driver: kvm2
	I0917 18:48:30.084224   84998 start.go:901] validating driver "kvm2" against &{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:48:30.084359   84998 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:48:30.085100   84998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:48:30.085186   84998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:48:30.101359   84998 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:48:30.101806   84998 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 18:48:30.101840   84998 cni.go:84] Creating CNI manager for ""
	I0917 18:48:30.101882   84998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:48:30.101924   84998 start.go:340] cluster config:
	{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:48:30.102022   84998 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:48:30.104131   84998 out.go:177] * Starting "newest-cni-089562" primary control-plane node in "newest-cni-089562" cluster
	I0917 18:48:30.105728   84998 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:48:30.105779   84998 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 18:48:30.105788   84998 cache.go:56] Caching tarball of preloaded images
	I0917 18:48:30.105869   84998 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:48:30.105881   84998 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 18:48:30.105993   84998 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json ...
	I0917 18:48:30.106207   84998 start.go:360] acquireMachinesLock for newest-cni-089562: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:48:30.106279   84998 start.go:364] duration metric: took 48.55µs to acquireMachinesLock for "newest-cni-089562"
	I0917 18:48:30.106301   84998 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:48:30.106308   84998 fix.go:54] fixHost starting: 
	I0917 18:48:30.106614   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.106649   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.122861   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0917 18:48:30.123284   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.123874   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.123913   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.124220   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.124400   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.124565   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:48:30.126172   84998 fix.go:112] recreateIfNeeded on newest-cni-089562: state=Stopped err=<nil>
	I0917 18:48:30.126198   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	W0917 18:48:30.126354   84998 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:48:30.128491   84998 out.go:177] * Restarting existing kvm2 VM for "newest-cni-089562" ...
	I0917 18:48:30.129817   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Start
	I0917 18:48:30.130007   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring networks are active...
	I0917 18:48:30.130919   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring network default is active
	I0917 18:48:30.131456   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring network mk-newest-cni-089562 is active
	I0917 18:48:30.131963   84998 main.go:141] libmachine: (newest-cni-089562) Getting domain xml...
	I0917 18:48:30.132948   84998 main.go:141] libmachine: (newest-cni-089562) Creating domain...
	I0917 18:48:31.399517   84998 main.go:141] libmachine: (newest-cni-089562) Waiting to get IP...
	I0917 18:48:31.400318   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:31.400846   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:31.400917   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:31.400814   85033 retry.go:31] will retry after 303.138391ms: waiting for machine to come up
	I0917 18:48:31.705472   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:31.706000   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:31.706027   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:31.705950   85033 retry.go:31] will retry after 357.628795ms: waiting for machine to come up
	I0917 18:48:32.065687   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:32.066177   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:32.066201   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:32.066126   85033 retry.go:31] will retry after 468.730442ms: waiting for machine to come up
	I0917 18:48:32.536718   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:32.537149   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:32.537178   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:32.537120   85033 retry.go:31] will retry after 492.831284ms: waiting for machine to come up
	I0917 18:48:33.031366   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:33.031851   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:33.031875   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:33.031804   85033 retry.go:31] will retry after 645.10896ms: waiting for machine to come up
	I0917 18:48:33.678340   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:33.678872   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:33.678894   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:33.678823   85033 retry.go:31] will retry after 604.007171ms: waiting for machine to come up
	I0917 18:48:34.284798   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:34.285359   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:34.285382   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:34.285304   85033 retry.go:31] will retry after 971.834239ms: waiting for machine to come up
	I0917 18:48:35.258438   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:35.258979   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:35.259009   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:35.258934   85033 retry.go:31] will retry after 1.181531642s: waiting for machine to come up
	I0917 18:48:36.441781   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:36.442278   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:36.442346   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:36.442262   85033 retry.go:31] will retry after 1.847919149s: waiting for machine to come up
	I0917 18:48:38.291608   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:38.292024   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:38.292052   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:38.291985   85033 retry.go:31] will retry after 1.793447562s: waiting for machine to come up
	I0917 18:48:40.087612   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:40.088155   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:40.088183   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:40.088099   85033 retry.go:31] will retry after 1.927111525s: waiting for machine to come up
	I0917 18:48:42.016952   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:42.017388   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:42.017415   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:42.017340   85033 retry.go:31] will retry after 2.854260337s: waiting for machine to come up
	I0917 18:48:44.874372   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:44.874914   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:44.874960   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:44.874873   85033 retry.go:31] will retry after 3.040881153s: waiting for machine to come up
	I0917 18:48:47.919194   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:47.919802   84998 main.go:141] libmachine: (newest-cni-089562) Found IP for machine: 192.168.61.47
	I0917 18:48:47.919833   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has current primary IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:47.919842   84998 main.go:141] libmachine: (newest-cni-089562) Reserving static IP address...
	I0917 18:48:47.920235   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "newest-cni-089562", mac: "52:54:00:0f:be:d2", ip: "192.168.61.47"} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:47.920268   84998 main.go:141] libmachine: (newest-cni-089562) DBG | skip adding static IP to network mk-newest-cni-089562 - found existing host DHCP lease matching {name: "newest-cni-089562", mac: "52:54:00:0f:be:d2", ip: "192.168.61.47"}
	I0917 18:48:47.920288   84998 main.go:141] libmachine: (newest-cni-089562) Reserved static IP address: 192.168.61.47
	I0917 18:48:47.920305   84998 main.go:141] libmachine: (newest-cni-089562) Waiting for SSH to be available...
	I0917 18:48:47.920321   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Getting to WaitForSSH function...
	I0917 18:48:47.922295   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:47.922705   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:47.922740   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:47.922868   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Using SSH client type: external
	I0917 18:48:47.922911   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa (-rw-------)
	I0917 18:48:47.922944   84998 main.go:141] libmachine: (newest-cni-089562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:48:47.922960   84998 main.go:141] libmachine: (newest-cni-089562) DBG | About to run SSH command:
	I0917 18:48:47.922974   84998 main.go:141] libmachine: (newest-cni-089562) DBG | exit 0
	I0917 18:48:48.050107   84998 main.go:141] libmachine: (newest-cni-089562) DBG | SSH cmd err, output: <nil>: 
	I0917 18:48:48.050472   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetConfigRaw
	I0917 18:48:48.051223   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetIP
	I0917 18:48:48.054268   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.054712   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.054744   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.055139   84998 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json ...
	I0917 18:48:48.055396   84998 machine.go:93] provisionDockerMachine start ...
	I0917 18:48:48.055424   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:48.055668   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.058145   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.058500   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.058526   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.058671   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:48.058874   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.059025   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.059177   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:48.059364   84998 main.go:141] libmachine: Using SSH client type: native
	I0917 18:48:48.059624   84998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0917 18:48:48.059651   84998 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:48:48.182311   84998 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:48:48.182344   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetMachineName
	I0917 18:48:48.182597   84998 buildroot.go:166] provisioning hostname "newest-cni-089562"
	I0917 18:48:48.182621   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetMachineName
	I0917 18:48:48.182817   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.185832   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.186276   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.186313   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.186594   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:48.186805   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.186977   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.187122   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:48.187295   84998 main.go:141] libmachine: Using SSH client type: native
	I0917 18:48:48.187539   84998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0917 18:48:48.187559   84998 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-089562 && echo "newest-cni-089562" | sudo tee /etc/hostname
	I0917 18:48:48.318807   84998 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-089562
	
	I0917 18:48:48.318836   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.321916   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.322270   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.322296   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.322504   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:48.322688   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.322868   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.323005   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:48.323214   84998 main.go:141] libmachine: Using SSH client type: native
	I0917 18:48:48.323373   84998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0917 18:48:48.323389   84998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-089562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-089562/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-089562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:48:48.450994   84998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:48:48.451024   84998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:48:48.451063   84998 buildroot.go:174] setting up certificates
	I0917 18:48:48.451076   84998 provision.go:84] configureAuth start
	I0917 18:48:48.451089   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetMachineName
	I0917 18:48:48.451358   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetIP
	I0917 18:48:48.454220   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.454654   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.454680   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.454843   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.456937   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.457345   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.457388   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.457670   84998 provision.go:143] copyHostCerts
	I0917 18:48:48.457750   84998 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:48:48.457762   84998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:48:48.457850   84998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:48:48.457979   84998 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:48:48.457987   84998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:48:48.458027   84998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:48:48.458124   84998 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:48:48.458134   84998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:48:48.458167   84998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:48:48.458252   84998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.newest-cni-089562 san=[127.0.0.1 192.168.61.47 localhost minikube newest-cni-089562]
	I0917 18:48:48.711318   84998 provision.go:177] copyRemoteCerts
	I0917 18:48:48.711374   84998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:48:48.711405   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.714283   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.714714   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.714747   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.715079   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:48.715296   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.715486   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:48.715657   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:48:48.804553   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:48:48.834735   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:48:48.863833   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:48:48.892634   84998 provision.go:87] duration metric: took 441.544912ms to configureAuth
	I0917 18:48:48.892663   84998 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:48:48.892909   84998 config.go:182] Loaded profile config "newest-cni-089562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:48:48.893000   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:48.896222   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.896665   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:48.896699   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:48.897011   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:48.897306   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.897587   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:48.897815   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:48.898021   84998 main.go:141] libmachine: Using SSH client type: native
	I0917 18:48:48.898197   84998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0917 18:48:48.898212   84998 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:48:49.157882   84998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:48:49.157917   84998 machine.go:96] duration metric: took 1.102502488s to provisionDockerMachine
	I0917 18:48:49.157931   84998 start.go:293] postStartSetup for "newest-cni-089562" (driver="kvm2")
	I0917 18:48:49.157943   84998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:48:49.157963   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:49.158286   84998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:48:49.158318   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:49.161046   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.161528   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:49.161555   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.161783   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:49.161967   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:49.162133   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:49.162266   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:48:49.249033   84998 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:48:49.253659   84998 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:48:49.253688   84998 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:48:49.253756   84998 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:48:49.253847   84998 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:48:49.253964   84998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:48:49.264667   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:48:49.298120   84998 start.go:296] duration metric: took 140.175151ms for postStartSetup
	I0917 18:48:49.298161   84998 fix.go:56] duration metric: took 19.191852193s for fixHost
	I0917 18:48:49.298184   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:49.301003   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.301367   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:49.301397   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.301660   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:49.301864   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:49.302006   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:49.302118   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:49.302312   84998 main.go:141] libmachine: Using SSH client type: native
	I0917 18:48:49.302535   84998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.47 22 <nil> <nil>}
	I0917 18:48:49.302554   84998 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:48:49.419901   84998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726598929.391498185
	
	I0917 18:48:49.419927   84998 fix.go:216] guest clock: 1726598929.391498185
	I0917 18:48:49.419937   84998 fix.go:229] Guest: 2024-09-17 18:48:49.391498185 +0000 UTC Remote: 2024-09-17 18:48:49.29816529 +0000 UTC m=+19.344301034 (delta=93.332895ms)
	I0917 18:48:49.419961   84998 fix.go:200] guest clock delta is within tolerance: 93.332895ms
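The tolerance check above is plain arithmetic on the two timestamps it just logged: guest 18:48:49.391498185 minus remote 18:48:49.29816529 gives 0.093332895 s, i.e. the reported 93.332895 ms, which is inside minikube's skew tolerance, so no clock adjustment is made.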
	I0917 18:48:49.419974   84998 start.go:83] releasing machines lock for "newest-cni-089562", held for 19.31367993s
	I0917 18:48:49.419999   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:49.420224   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetIP
	I0917 18:48:49.423403   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.423722   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:49.423751   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.423940   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:49.424486   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:49.424700   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:49.424785   84998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:48:49.424829   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:49.424950   84998 ssh_runner.go:195] Run: cat /version.json
	I0917 18:48:49.424978   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:48:49.428389   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.428712   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.428749   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:49.428766   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.428934   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:49.429081   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:49.429181   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:49.429207   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:49.429287   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:49.429459   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:48:49.429480   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:48:49.429654   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:48:49.429804   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:48:49.429933   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:48:49.537998   84998 ssh_runner.go:195] Run: systemctl --version
	I0917 18:48:49.544728   84998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:48:49.692039   84998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:48:49.699407   84998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:48:49.699492   84998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:48:49.718027   84998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:48:49.718053   84998 start.go:495] detecting cgroup driver to use...
	I0917 18:48:49.718119   84998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:48:49.736917   84998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:48:49.751069   84998 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:48:49.751166   84998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:48:49.766701   84998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:48:49.782657   84998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:48:49.924783   84998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:48:50.071353   84998 docker.go:233] disabling docker service ...
	I0917 18:48:50.071429   84998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:48:50.087658   84998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:48:50.102006   84998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:48:50.273911   84998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:48:50.407326   84998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:48:50.423081   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:48:50.443417   84998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:48:50.443490   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.455856   84998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:48:50.455915   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.468036   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.479912   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.492307   84998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:48:50.509520   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.521221   84998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:48:50.542373   84998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
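The sed/grep edits above all target /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick verification sketch (not part of the test run) is:

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",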
	I0917 18:48:50.554173   84998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:48:50.564811   84998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:48:50.564880   84998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:48:50.580671   84998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
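The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the next steps are modprobe br_netfilter and enabling IPv4 forwarding. A manual re-check after those commands would be:

  lsmod | grep br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
  cat /proc/sys/net/ipv4/ip_forward                # 1 after the echo above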
	I0917 18:48:50.591714   84998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:48:50.719799   84998 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:48:50.819207   84998 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:48:50.819282   84998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:48:50.824484   84998 start.go:563] Will wait 60s for crictl version
	I0917 18:48:50.824555   84998 ssh_runner.go:195] Run: which crictl
	I0917 18:48:50.828492   84998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:48:50.868061   84998 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:48:50.868155   84998 ssh_runner.go:195] Run: crio --version
	I0917 18:48:50.896773   84998 ssh_runner.go:195] Run: crio --version
	I0917 18:48:50.929357   84998 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:48:50.930671   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetIP
	I0917 18:48:50.933683   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:50.934016   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:48:50.934041   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:50.934294   84998 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:48:50.938844   84998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:48:50.954391   84998 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0917 18:48:50.955826   84998 kubeadm.go:883] updating cluster {Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:48:50.955948   84998 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:48:50.956015   84998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:48:50.995447   84998 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:48:50.995513   84998 ssh_runner.go:195] Run: which lz4
	I0917 18:48:50.999911   84998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:48:51.004809   84998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:48:51.004849   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:48:52.442398   84998 crio.go:462] duration metric: took 1.44252447s to copy over tarball
	I0917 18:48:52.442488   84998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:48:54.587812   84998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.145298321s)
	I0917 18:48:54.587842   84998 crio.go:469] duration metric: took 2.145415338s to extract the tarball
	I0917 18:48:54.587850   84998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:48:54.625985   84998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:48:54.669085   84998 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:48:54.669114   84998 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:48:54.669124   84998 kubeadm.go:934] updating node { 192.168.61.47 8443 v1.31.1 crio true true} ...
	I0917 18:48:54.669221   84998 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-089562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:48:54.669322   84998 ssh_runner.go:195] Run: crio config
	I0917 18:48:54.713823   84998 cni.go:84] Creating CNI manager for ""
	I0917 18:48:54.713847   84998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:48:54.713858   84998 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0917 18:48:54.713877   84998 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.47 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-089562 NodeName:newest-cni-089562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:48:54.714013   84998 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-089562"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
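The generated config above is later copied to /var/tmp/minikube/kubeadm.yaml.new and swapped into /var/tmp/minikube/kubeadm.yaml before the init phases below. As an optional sanity check that the test itself does not perform, recent kubeadm releases can validate such a file without applying it (a sketch, assuming the bundled kubeadm supports the subcommand):

  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new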
	I0917 18:48:54.714072   84998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:48:54.724242   84998 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:48:54.724306   84998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:48:54.734390   84998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0917 18:48:54.751664   84998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:48:54.770065   84998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0917 18:48:54.788147   84998 ssh_runner.go:195] Run: grep 192.168.61.47	control-plane.minikube.internal$ /etc/hosts
	I0917 18:48:54.792253   84998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:48:54.805288   84998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:48:54.918916   84998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:48:54.937194   84998 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562 for IP: 192.168.61.47
	I0917 18:48:54.937222   84998 certs.go:194] generating shared ca certs ...
	I0917 18:48:54.937255   84998 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:48:54.937426   84998 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:48:54.937492   84998 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:48:54.937506   84998 certs.go:256] generating profile certs ...
	I0917 18:48:54.937620   84998 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/client.key
	I0917 18:48:54.937696   84998 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/apiserver.key.b662eab0
	I0917 18:48:54.937753   84998 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/proxy-client.key
	I0917 18:48:54.937897   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:48:54.937940   84998 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:48:54.937954   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:48:54.937982   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:48:54.938005   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:48:54.938028   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:48:54.938078   84998 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:48:54.938763   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:48:54.988174   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:48:55.023545   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:48:55.058990   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:48:55.088725   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:48:55.118386   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:48:55.149409   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:48:55.177082   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:48:55.203513   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:48:55.230723   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:48:55.255836   84998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:48:55.281366   84998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:48:55.300386   84998 ssh_runner.go:195] Run: openssl version
	I0917 18:48:55.306828   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:48:55.318939   84998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:48:55.324067   84998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:48:55.324126   84998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:48:55.330351   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:48:55.342408   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:48:55.354219   84998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:48:55.359657   84998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:48:55.359729   84998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:48:55.366169   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:48:55.377776   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:48:55.389354   84998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:48:55.394146   84998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:48:55.394231   84998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:48:55.400153   84998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
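The openssl/ln sequence above builds the standard OpenSSL subject-hash layout: each CA file is hashed and a symlink named <hash>.0 is placed in /etc/ssl/certs so TLS libraries can locate it. From this run, for example:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem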
	I0917 18:48:55.411795   84998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:48:55.416705   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:48:55.422993   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:48:55.429050   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:48:55.435270   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:48:55.441209   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:48:55.447138   84998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
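Each `-checkend 86400` above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so, non-zero if it would expire within that window, and the zero exits let minikube reuse the existing control-plane certificates. For example:

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least another 24h"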
	I0917 18:48:55.453258   84998 kubeadm.go:392] StartCluster: {Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:48:55.453340   84998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:48:55.453384   84998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:48:55.492376   84998 cri.go:89] found id: ""
	I0917 18:48:55.492441   84998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:48:55.503313   84998 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:48:55.503340   84998 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:48:55.503394   84998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:48:55.513432   84998 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:48:55.514043   84998 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-089562" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:48:55.514265   84998 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-089562" cluster setting kubeconfig missing "newest-cni-089562" context setting]
	I0917 18:48:55.514753   84998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:48:55.516046   84998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:48:55.526138   84998 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.47
	I0917 18:48:55.526172   84998 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:48:55.526187   84998 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:48:55.526245   84998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:48:55.568507   84998 cri.go:89] found id: ""
	I0917 18:48:55.568585   84998 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:48:55.586189   84998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:48:55.596602   84998 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:48:55.596625   84998 kubeadm.go:157] found existing configuration files:
	
	I0917 18:48:55.596685   84998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:48:55.607161   84998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:48:55.607226   84998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:48:55.617606   84998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:48:55.628943   84998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:48:55.629007   84998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:48:55.640531   84998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:48:55.651696   84998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:48:55.651752   84998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:48:55.663284   84998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:48:55.673166   84998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:48:55.673259   84998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:48:55.683373   84998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:48:55.693874   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:48:55.816791   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:48:57.354386   84998 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.537558204s)
	I0917 18:48:57.354426   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:48:57.581472   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:48:57.664908   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:48:57.751934   84998 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:48:57.752023   84998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:48:58.252896   84998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:48:58.752205   84998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:48:58.773136   84998 api_server.go:72] duration metric: took 1.021201387s to wait for apiserver process to appear ...
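The polling loop above relies on pgrep's matching flags: -f matches against the full command line, -x requires that whole line to match the pattern, and -n returns only the newest matching process, so the wait ends as soon as a freshly started kube-apiserver launched with minikube's arguments is visible.

  pgrep -xnf 'kube-apiserver.*minikube.*'   # prints the newest matching PID, exit status 1 if none yet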
	I0917 18:48:58.773167   84998 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:48:58.773188   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:48:58.773798   84998 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
	I0917 18:48:59.274115   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:01.677121   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:49:01.677156   84998 api_server.go:103] status: https://192.168.61.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:49:01.677173   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:01.705985   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:49:01.706016   84998 api_server.go:103] status: https://192.168.61.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:49:01.774262   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:01.779841   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:49:01.779875   84998 api_server.go:103] status: https://192.168.61.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:49:02.273310   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:02.278473   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:49:02.278505   84998 api_server.go:103] status: https://192.168.61.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:49:02.774103   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:02.784856   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:49:02.784883   84998 api_server.go:103] status: https://192.168.61.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:49:03.273338   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:03.277970   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 200:
	ok
	I0917 18:49:03.285428   84998 api_server.go:141] control plane version: v1.31.1
	I0917 18:49:03.285460   84998 api_server.go:131] duration metric: took 4.512285001s to wait for apiserver health ...
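The healthz progression above is typical of a control-plane restart: the first probe is refused while the apiserver binds its port, early probes then return 403 because the anonymous request is not yet authorized to read /healthz, 500 follows while individual poststarthooks (rbac/bootstrap-roles, bootstrap-controller, and the scheduling priority classes) are still completing, and 200 arrives once every hook reports ok. The same detailed listing can be pulled by hand with:

  curl -sk 'https://192.168.61.47:8443/healthz?verbose'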
	I0917 18:49:03.285470   84998 cni.go:84] Creating CNI manager for ""
	I0917 18:49:03.285478   84998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:49:03.287602   84998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:49:03.289148   84998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:49:03.333841   84998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:49:03.372604   84998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:49:03.382859   84998 system_pods.go:59] 8 kube-system pods found
	I0917 18:49:03.382892   84998 system_pods.go:61] "coredns-7c65d6cfc9-x579g" [4f809291-a04c-4e2c-9a2e-b76489eee53f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:49:03.382900   84998 system_pods.go:61] "etcd-newest-cni-089562" [c8678367-d2aa-4079-9d39-47b59f55d9f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:49:03.382908   84998 system_pods.go:61] "kube-apiserver-newest-cni-089562" [35772dc0-5239-48c3-90ee-6744015efe68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:49:03.382914   84998 system_pods.go:61] "kube-controller-manager-newest-cni-089562" [d4799ee2-319c-4ec5-9584-66fd85cd8d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:49:03.382926   84998 system_pods.go:61] "kube-proxy-mhcgm" [bfcd6883-a3f4-4163-b0c3-48784f9617b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:49:03.382931   84998 system_pods.go:61] "kube-scheduler-newest-cni-089562" [54cb19d1-88c4-44d8-a3a5-23bb9ad7f1eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:49:03.382936   84998 system_pods.go:61] "metrics-server-6867b74b74-8jnp4" [45b6e336-f6fc-4e69-96e0-8a9d00f8b0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:49:03.382942   84998 system_pods.go:61] "storage-provisioner" [50794c5a-b3aa-4ba9-9874-960795392e0d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:49:03.382947   84998 system_pods.go:74] duration metric: took 10.32069ms to wait for pod list to return data ...
	I0917 18:49:03.382954   84998 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:49:03.388326   84998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:49:03.388361   84998 node_conditions.go:123] node cpu capacity is 2
	I0917 18:49:03.388375   84998 node_conditions.go:105] duration metric: took 5.415724ms to run NodePressure ...
	I0917 18:49:03.388397   84998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:49:03.671405   84998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:49:03.684782   84998 ops.go:34] apiserver oom_adj: -16
	I0917 18:49:03.684805   84998 kubeadm.go:597] duration metric: took 8.181458642s to restartPrimaryControlPlane
	I0917 18:49:03.684814   84998 kubeadm.go:394] duration metric: took 8.231563944s to StartCluster
	I0917 18:49:03.684829   84998 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:49:03.684904   84998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:49:03.686044   84998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:49:03.686314   84998 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:49:03.686396   84998 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:49:03.686514   84998 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-089562"
	I0917 18:49:03.686528   84998 config.go:182] Loaded profile config "newest-cni-089562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:49:03.686538   84998 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-089562"
	I0917 18:49:03.686532   84998 addons.go:69] Setting default-storageclass=true in profile "newest-cni-089562"
	W0917 18:49:03.686546   84998 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:49:03.686561   84998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-089562"
	I0917 18:49:03.686553   84998 addons.go:69] Setting metrics-server=true in profile "newest-cni-089562"
	I0917 18:49:03.686613   84998 addons.go:234] Setting addon metrics-server=true in "newest-cni-089562"
	W0917 18:49:03.686629   84998 addons.go:243] addon metrics-server should already be in state true
	I0917 18:49:03.686667   84998 host.go:66] Checking if "newest-cni-089562" exists ...
	I0917 18:49:03.686564   84998 addons.go:69] Setting dashboard=true in profile "newest-cni-089562"
	I0917 18:49:03.686760   84998 addons.go:234] Setting addon dashboard=true in "newest-cni-089562"
	W0917 18:49:03.686771   84998 addons.go:243] addon dashboard should already be in state true
	I0917 18:49:03.686797   84998 host.go:66] Checking if "newest-cni-089562" exists ...
	I0917 18:49:03.686589   84998 host.go:66] Checking if "newest-cni-089562" exists ...
	I0917 18:49:03.687050   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.687082   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.687109   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.687163   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.687198   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.687111   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.687237   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.687266   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.688506   84998 out.go:177] * Verifying Kubernetes components...
	I0917 18:49:03.690009   84998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:49:03.704218   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0917 18:49:03.704714   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.705306   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.705327   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.705718   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.706008   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:49:03.707181   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0917 18:49:03.707264   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0917 18:49:03.707663   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.707671   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.707803   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0917 18:49:03.708130   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.708146   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.708150   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.708162   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.708216   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.708515   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.708533   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.708707   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.708724   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.709046   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.709071   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.709087   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.709123   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.709169   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.709336   84998 addons.go:234] Setting addon default-storageclass=true in "newest-cni-089562"
	W0917 18:49:03.709356   84998 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:49:03.709382   84998 host.go:66] Checking if "newest-cni-089562" exists ...
	I0917 18:49:03.709597   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.709637   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.709728   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.709762   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.724688   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0917 18:49:03.725206   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.725857   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.725884   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.726334   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.726573   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:49:03.728653   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:49:03.729791   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I0917 18:49:03.730282   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0917 18:49:03.730393   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.730619   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.730871   84998 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0917 18:49:03.730990   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.731006   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.731122   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.731143   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.731353   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.731527   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:49:03.731598   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.731797   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:49:03.733620   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:49:03.733665   84998 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0917 18:49:03.733893   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:49:03.735007   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0917 18:49:03.735013   84998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:49:03.735023   84998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0917 18:49:03.735065   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:49:03.735139   84998 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:49:03.736404   84998 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:49:03.736438   84998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:49:03.736454   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:49:03.736711   84998 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:49:03.736725   84998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:49:03.736741   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:49:03.738384   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.739078   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:49:03.739096   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.739251   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:49:03.739535   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:49:03.739665   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:49:03.739768   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:49:03.740577   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.740667   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.740945   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:49:03.740959   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.741070   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:49:03.741086   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.741129   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:49:03.741355   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:49:03.741415   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:49:03.741544   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:49:03.741576   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:49:03.741690   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:49:03.741810   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:49:03.741925   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:49:03.754366   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0917 18:49:03.754809   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.755279   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.755298   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.755592   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.756125   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:49:03.756180   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:49:03.771685   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I0917 18:49:03.772205   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:49:03.772761   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:49:03.772789   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:49:03.773176   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:49:03.773433   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:49:03.775082   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:49:03.775267   84998 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:49:03.775281   84998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:49:03.775294   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHHostname
	I0917 18:49:03.777910   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.778323   84998 main.go:141] libmachine: (newest-cni-089562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:be:d2", ip: ""} in network mk-newest-cni-089562: {Iface:virbr3 ExpiryTime:2024-09-17 19:48:41 +0000 UTC Type:0 Mac:52:54:00:0f:be:d2 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:newest-cni-089562 Clientid:01:52:54:00:0f:be:d2}
	I0917 18:49:03.778349   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined IP address 192.168.61.47 and MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:49:03.778427   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHPort
	I0917 18:49:03.778554   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHKeyPath
	I0917 18:49:03.778687   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetSSHUsername
	I0917 18:49:03.778784   84998 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/newest-cni-089562/id_rsa Username:docker}
	I0917 18:49:03.948317   84998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:49:03.977539   84998 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:49:03.977623   84998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:49:03.999225   84998 api_server.go:72] duration metric: took 312.883201ms to wait for apiserver process to appear ...
	I0917 18:49:03.999249   84998 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:49:03.999280   84998 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
	I0917 18:49:04.009715   84998 api_server.go:279] https://192.168.61.47:8443/healthz returned 200:
	ok
	I0917 18:49:04.013509   84998 api_server.go:141] control plane version: v1.31.1
	I0917 18:49:04.013549   84998 api_server.go:131] duration metric: took 14.292938ms to wait for apiserver health ...
	I0917 18:49:04.013561   84998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:49:04.024473   84998 system_pods.go:59] 8 kube-system pods found
	I0917 18:49:04.024506   84998 system_pods.go:61] "coredns-7c65d6cfc9-x579g" [4f809291-a04c-4e2c-9a2e-b76489eee53f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:49:04.024530   84998 system_pods.go:61] "etcd-newest-cni-089562" [c8678367-d2aa-4079-9d39-47b59f55d9f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:49:04.024545   84998 system_pods.go:61] "kube-apiserver-newest-cni-089562" [35772dc0-5239-48c3-90ee-6744015efe68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:49:04.024554   84998 system_pods.go:61] "kube-controller-manager-newest-cni-089562" [d4799ee2-319c-4ec5-9584-66fd85cd8d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:49:04.024561   84998 system_pods.go:61] "kube-proxy-mhcgm" [bfcd6883-a3f4-4163-b0c3-48784f9617b2] Running
	I0917 18:49:04.024572   84998 system_pods.go:61] "kube-scheduler-newest-cni-089562" [54cb19d1-88c4-44d8-a3a5-23bb9ad7f1eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:49:04.024580   84998 system_pods.go:61] "metrics-server-6867b74b74-8jnp4" [45b6e336-f6fc-4e69-96e0-8a9d00f8b0de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:49:04.024590   84998 system_pods.go:61] "storage-provisioner" [50794c5a-b3aa-4ba9-9874-960795392e0d] Running
	I0917 18:49:04.024599   84998 system_pods.go:74] duration metric: took 11.030272ms to wait for pod list to return data ...
	I0917 18:49:04.024612   84998 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:49:04.028968   84998 default_sa.go:45] found service account: "default"
	I0917 18:49:04.028990   84998 default_sa.go:55] duration metric: took 4.369667ms for default service account to be created ...
	I0917 18:49:04.029001   84998 kubeadm.go:582] duration metric: took 342.663407ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 18:49:04.029019   84998 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:49:04.037882   84998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:49:04.037906   84998 node_conditions.go:123] node cpu capacity is 2
	I0917 18:49:04.037914   84998 node_conditions.go:105] duration metric: took 8.890742ms to run NodePressure ...
	I0917 18:49:04.037925   84998 start.go:241] waiting for startup goroutines ...
	I0917 18:49:04.054541   84998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:49:04.119022   84998 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:49:04.119045   84998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:49:04.185076   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0917 18:49:04.185103   84998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0917 18:49:04.187281   84998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:49:04.189327   84998 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:49:04.189351   84998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:49:04.255908   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0917 18:49:04.255938   84998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0917 18:49:04.286232   84998 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:49:04.286257   84998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:49:04.336234   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0917 18:49:04.336261   84998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0917 18:49:04.348197   84998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:49:04.367703   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0917 18:49:04.367727   84998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0917 18:49:04.416848   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0917 18:49:04.416875   84998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0917 18:49:04.476284   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0917 18:49:04.476342   84998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0917 18:49:04.528222   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0917 18:49:04.528249   84998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0917 18:49:04.629854   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0917 18:49:04.629884   84998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0917 18:49:04.681135   84998 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 18:49:04.681163   84998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0917 18:49:04.760489   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:04.760517   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:04.760808   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:04.760871   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:04.760904   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:04.760924   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:04.760935   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:04.761250   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:04.761266   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:04.761487   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:04.768572   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:04.768602   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:04.768888   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:04.768920   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:04.768932   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:04.769139   84998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 18:49:06.063134   84998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.714891542s)
	I0917 18:49:06.063199   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.063214   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.063218   84998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875906214s)
	I0917 18:49:06.063253   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.063279   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.063624   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:06.063633   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.063646   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.063655   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.063662   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.063701   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.063715   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.063723   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.063731   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.063627   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:06.064020   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.064182   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.064198   84998 addons.go:475] Verifying addon metrics-server=true in "newest-cni-089562"
	I0917 18:49:06.065413   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:06.065454   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.065476   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.652514   84998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.883327172s)
	I0917 18:49:06.652569   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.652583   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.652900   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:06.652954   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.652997   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.653006   84998 main.go:141] libmachine: Making call to close driver server
	I0917 18:49:06.653012   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Close
	I0917 18:49:06.653306   84998 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:49:06.653358   84998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:49:06.653391   84998 main.go:141] libmachine: (newest-cni-089562) DBG | Closing plugin on server side
	I0917 18:49:06.655474   84998 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-089562 addons enable metrics-server
	
	I0917 18:49:06.657116   84998 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0917 18:49:06.658425   84998 addons.go:510] duration metric: took 2.972028505s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0917 18:49:06.658474   84998 start.go:246] waiting for cluster config update ...
	I0917 18:49:06.658491   84998 start.go:255] writing updated cluster config ...
	I0917 18:49:06.658816   84998 ssh_runner.go:195] Run: rm -f paused
	I0917 18:49:06.725281   84998 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:49:06.727235   84998 out.go:177] * Done! kubectl is now configured to use "newest-cni-089562" cluster and "default" namespace by default
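	With the profile reporting Done, the addons enabled above can be verified against the same context; a minimal sketch, assuming the namespaces and labels minikube's dashboard and metrics-server addons normally use:
	
		# Confirm addon status and the pods the dashboard/metrics-server addons created.
		minikube -p newest-cni-089562 addons list
		kubectl --context newest-cni-089562 -n kubernetes-dashboard get pods
		kubectl --context newest-cni-089562 -n kube-system get pods -l k8s-app=metrics-server
	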
	
	
	==> CRI-O <==
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.096104856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598969096081396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bf6909a-cd77-4de5-b5d3-cef6b8826485 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.098802997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2ecb0f6-ef7c-4797-bbb2-f833462caa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.098920238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2ecb0f6-ef7c-4797-bbb2-f833462caa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.099225736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2ecb0f6-ef7c-4797-bbb2-f833462caa6d name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.148093851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8da26d9e-2c1f-42f3-a46f-7d9a249661c1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.148175092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8da26d9e-2c1f-42f3-a46f-7d9a249661c1 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.149565916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf9ed896-407f-457f-9d79-8479393ccb3b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.150056786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598969150033572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf9ed896-407f-457f-9d79-8479393ccb3b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.150579057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b1882c6-9758-4839-bb7e-f6d956e8c533 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.150632339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b1882c6-9758-4839-bb7e-f6d956e8c533 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.150921463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b1882c6-9758-4839-bb7e-f6d956e8c533 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.193834884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=874c93a6-b9cc-4c56-acd3-4fa5d081a869 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.193912195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=874c93a6-b9cc-4c56-acd3-4fa5d081a869 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.195447687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07971b47-ed3c-4283-955a-299764066153 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.196057215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598969196020908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07971b47-ed3c-4283-955a-299764066153 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.196552099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09a3cd9e-3b30-4009-8ac7-ffaa0ab303b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.196607904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09a3cd9e-3b30-4009-8ac7-ffaa0ab303b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.196884059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09a3cd9e-3b30-4009-8ac7-ffaa0ab303b5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.233915040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f81766fb-3cbe-4530-a545-ec40a8857cab name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.234010293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f81766fb-3cbe-4530-a545-ec40a8857cab name=/runtime.v1.RuntimeService/Version
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.235329906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee19194d-4182-494b-b496-143a565828c3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.235888187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598969235860152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee19194d-4182-494b-b496-143a565828c3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.236343752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09ac8e43-c6fd-498f-9140-5fe367990718 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.236395166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09ac8e43-c6fd-498f-9140-5fe367990718 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:49:29 default-k8s-diff-port-438836 crio[715]: time="2024-09-17 18:49:29.236607564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2,PodSandboxId:7923027a515fafeb14cafdccf65674e8ab363eafd9c8521c201d536688427d48,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726597971340365083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ae1ecf-9311-4d61-a56d-9147876d4a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9,PodSandboxId:ba8a4f783612b9ee3f9b29afe0de11f7d2a97125a5904615cb6eed0b9ac631e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970014126565,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4l48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f,PodSandboxId:edaaa54b9023ee5ac274786fdda52a691b3f386503d3fecbae6623f985ec1c94,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726597970033199716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8nrnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 96eeb328-605e-468b-a022-dbb7b5b44501,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663,PodSandboxId:3fb89d2bd06f6aa0b8191de26abfe9a8ee98722c72030fd0b39d7f596988a198,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726597969262811666,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xwqtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5875ff28-7e41-4887-94da-d7632d8141e8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23,PodSandboxId:747cd257b791e23976b223b692f5ee95f70169a7f4c00f548182f38927dc66f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172659795
8292872227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16c8cafb7a9faf4d563bc354343d4a14,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c,PodSandboxId:dbc153b93eb310e52b3f1c0d3117e0af55c074e5a14c00d8d4de005590435a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172
6597958298498310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7f45bd62ebffc8bb2ad5afa38b84c6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f,PodSandboxId:e4ec6be3f735c261867fc8996e332c520db697ab66073baff0ae403c9e04f673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597958205257659,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e9ce964059f89a4c4963cb520a63bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0,PodSandboxId:c76c1f3a1fcee01b98aa500e538c3f66249a9c8a17e3ca0e62f264a605c9d325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597958128230597,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5,PodSandboxId:f8b9965d59910efe5fe80491c57c0e246a58c4be035388e14dc1d1f5955cb961,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597669999094213,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-438836,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663ce023ea41410cef7d5ca4b524d300,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09ac8e43-c6fd-498f-9140-5fe367990718 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fbd0e5e760d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   7923027a515fa       storage-provisioner
	a0c6cb1df5f79       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   edaaa54b9023e       coredns-7c65d6cfc9-8nrnc
	9840debe68b50       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   ba8a4f783612b       coredns-7c65d6cfc9-x4l48
	8198df1218bca       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   3fb89d2bd06f6       kube-proxy-xwqtr
	c01bdc8e5cd2f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   dbc153b93eb31       etcd-default-k8s-diff-port-438836
	e40fdd1fd764d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   747cd257b791e       kube-controller-manager-default-k8s-diff-port-438836
	5a88b1fb4a49a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   e4ec6be3f735c       kube-scheduler-default-k8s-diff-port-438836
	b58b4f5db1ade       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   c76c1f3a1fcee       kube-apiserver-default-k8s-diff-port-438836
	5bd03b090b920       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   f8b9965d59910       kube-apiserver-default-k8s-diff-port-438836
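	For reference, a container listing like the one above can usually be reproduced directly on the node; this is a minimal sketch (not part of the test run), assuming the profile name taken from these logs and the crictl binary shipped in the standard minikube ISO:

	    minikube ssh -p default-k8s-diff-port-438836 -- sudo crictl ps -a
	    minikube ssh -p default-k8s-diff-port-438836 -- sudo crictl logs <container-id>

	The IDs in the first column of the table are the IDs crictl expects; a unique prefix is enough.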
	
	
	==> coredns [9840debe68b5053c1a491899a5d7c656982084f2f2e4604d316cce9d1a26c7a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a0c6cb1df5f79832219bd145ba552a3daa93c23a8b00ceb93302f6999bbc7c1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-438836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-438836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=default-k8s-diff-port-438836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:32:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-438836
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:48:10 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:48:10 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:48:10 +0000   Tue, 17 Sep 2024 18:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:48:10 +0000   Tue, 17 Sep 2024 18:32:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    default-k8s-diff-port-438836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 121495779e0d4310bb99eb1555fdbd16
	  System UUID:                12149577-9e0d-4310-bb99-eb1555fdbd16
	  Boot ID:                    ad02a2b6-bf44-4181-9070-705b317051e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8nrnc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-x4l48                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-438836                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-438836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-438836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xwqtr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-438836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-qnfv2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-438836 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-438836 event: Registered Node default-k8s-diff-port-438836 in Controller
	  Normal  CIDRAssignmentFailed     16m                cidrAllocator    Node default-k8s-diff-port-438836 status is now: CIDRAssignmentFailed
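	As a rough sanity check on the Allocated resources figures above, recomputed from the Allocatable values shown for this node: CPU requests of 950m against 2 allocatable CPUs (2000m) give 950/2000 ≈ 47%, and memory requests of 440Mi against 2164184Ki allocatable give 450560Ki/2164184Ki ≈ 21%, consistent with the 47% and 20% kubectl reports up to rounding.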
	
	
	==> dmesg <==
	[  +0.052542] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042240] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.915767] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574460] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641227] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.740995] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.064887] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076457] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.218343] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.160693] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.376709] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.780578] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +0.067769] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.386700] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +5.655402] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.801206] kauditd_printk_skb: 85 callbacks suppressed
	[Sep17 18:32] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.393770] systemd-fstab-generator[2615]: Ignoring "noauto" option for root device
	[  +4.766861] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.800419] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +5.800095] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.268975] systemd-fstab-generator[3172]: Ignoring "noauto" option for root device
	[  +6.450932] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [c01bdc8e5cd2fe1fb27733c583449b8c337d1c3156b08f4708f3e06c1c03fc6c] <==
	{"level":"info","ts":"2024-09-17T18:32:39.050831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 1"}
	{"level":"info","ts":"2024-09-17T18:32:39.050847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.050869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2024-09-17T18:32:39.054926Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:default-k8s-diff-port-438836 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:32:39.055095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:39.055592Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.057946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:32:39.061033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:39.062398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	{"level":"info","ts":"2024-09-17T18:32:39.066555Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:32:39.069505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:32:39.071029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:39.075729Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:32:39.075925Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.077834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:32:39.077933Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:42:39.261890Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-09-17T18:42:39.271808Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"9.300163ms","hash":757706465,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-17T18:42:39.271949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":757706465,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T18:47:39.272760Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-09-17T18:47:39.277914Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"4.41263ms","hash":1371771554,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-17T18:47:39.278023Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1371771554,"revision":966,"compact-revision":723}
	{"level":"info","ts":"2024-09-17T18:48:57.175166Z","caller":"traceutil/trace.go:171","msg":"trace[1663525931] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"107.502384ms","start":"2024-09-17T18:48:57.067631Z","end":"2024-09-17T18:48:57.175134Z","steps":["trace[1663525931] 'process raft request'  (duration: 107.164352ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:49:29 up 22 min,  0 users,  load average: 0.09, 0.29, 0.26
	Linux default-k8s-diff-port-438836 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5bd03b090b920a7cb9d13bcb9bb11127b97b36285f56e2c13a4ae01064016eb5] <==
	W0917 18:32:30.248021       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.251053       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.264752       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.309301       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.331945       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.375292       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.380949       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.385495       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.446278       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.551957       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.551957       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.553294       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.592605       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.658192       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.736986       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.781305       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.843139       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.857193       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.867857       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.893730       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:30.964164       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:31.007781       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:34.771412       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:35.043367       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:32:35.159594       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b58b4f5db1adeb379ee6936ee97d2de412f77fa82e9596ab6d585d73685519b0] <==
	I0917 18:45:42.138453       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:45:42.139594       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:47:41.137191       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:41.137328       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:47:42.139170       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:42.139276       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:47:42.139389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:47:42.139613       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:47:42.140475       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:47:42.141551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:48:42.140994       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:48:42.141089       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:48:42.142147       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:48:42.142295       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:48:42.142419       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:48:42.143732       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
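	The repeated 503s for v1beta1.metrics.k8s.io above mean the aggregated metrics API served by metrics-server never became reachable from the apiserver. A minimal sketch of how to confirm this from the client side, assuming the kubeconfig context is named after the profile (as minikube does) and that the addon's pods carry the usual k8s-app=metrics-server label:

	    kubectl --context default-k8s-diff-port-438836 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context default-k8s-diff-port-438836 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context default-k8s-diff-port-438836 -n kube-system logs -l k8s-app=metrics-server --tail=50

	An APIService whose Available condition is False would account for these 503 responses as well as the "stale GroupVersion discovery" errors in the controller-manager log below.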
	
	
	==> kube-controller-manager [e40fdd1fd764d351571e680a299a8cc448471f3dbd8cef20a8d2af3297a33f23] <==
	E0917 18:44:18.128719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:18.692787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:44:25.654221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="91.351µs"
	E0917 18:44:48.135302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:48.702057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:18.142533       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:18.710418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:48.149397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:48.721877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:18.156707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:18.732983       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:48.163894       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:48.741548       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:47:18.171755       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:47:18.750857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:47:48.179003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:47:48.758491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:48:10.851801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-438836"
	E0917 18:48:18.185887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:48:18.769137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:48:48.193451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:48:48.778582       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:49:18.200014       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:49:18.787562       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:49:21.653354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="224.146µs"
	
	
	==> kube-proxy [8198df1218bca5231f562facee0a790436f98cf41df2b13d4cd52b339a38e663] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:32:49.855997       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:32:49.901183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E0917 18:32:49.901286       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:32:50.112081       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:32:50.112119       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:32:50.112143       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:32:50.116304       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:32:50.116614       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:32:50.116625       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:32:50.119600       1 config.go:199] "Starting service config controller"
	I0917 18:32:50.119635       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:32:50.119747       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:32:50.119753       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:32:50.120417       1 config.go:328] "Starting node config controller"
	I0917 18:32:50.120460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:32:50.221768       1 shared_informer.go:320] Caches are synced for node config
	I0917 18:32:50.221814       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:32:50.221841       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5a88b1fb4a49aaf15b06bc8a6136326491aac09f2c6933e9fe3b24c6c2e0420f] <==
	W0917 18:32:41.132728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:41.135283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.032550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:32:42.032705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.043108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:32:42.043234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.044682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.044792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.072466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:32:42.072528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.118207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:32:42.118270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.191111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 18:32:42.191167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.260329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 18:32:42.260386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.319704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.319838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.353379       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:32:42.353519       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:32:42.409474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.409564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:32:42.470722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 18:32:42.470780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0917 18:32:45.000820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:48:33 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:33.943315    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598913943032847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:33 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:33.943585    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598913943032847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:36 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:36.632814    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:43.722446    2943 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:43.944692    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598923944349168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:43 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:43.944750    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598923944349168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:47 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:47.634110    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:48:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:53.946434    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598933945949438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:53 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:53.947624    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598933945949438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:58 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:48:58.632635    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:49:03 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:03.952885    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598943949247204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:49:03 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:03.952943    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598943949247204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:49:09 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:09.650098    2943 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 18:49:09 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:09.650243    2943 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 18:49:09 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:09.650590    2943 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbxnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-qnfv2_kube-system(75be5ed8-b62d-42c8-8ea9-5809187be05a): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 17 18:49:09 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:09.653161    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:49:13 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:13.959263    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598953958027431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:49:13 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:13.959311    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598953958027431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:49:21 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:21.634383    2943 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qnfv2" podUID="75be5ed8-b62d-42c8-8ea9-5809187be05a"
	Sep 17 18:49:23 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:23.961631    2943 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598963961219760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:49:23 default-k8s-diff-port-438836 kubelet[2943]: E0917 18:49:23.961726    2943 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598963961219760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2fbd0e5e760d13f98ab9ba88d999247cdf66a8ad8098c9ee7f28e65d6572a9b2] <==
	I0917 18:32:51.464714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:32:51.475466       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:32:51.475512       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:32:51.494524       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:32:51.494721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d!
	I0917 18:32:51.497516       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b5836b4-3547-40fb-980a-2268372245a3", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d became leader
	I0917 18:32:51.595115       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-438836_8361e523-e803-46d6-9e51-ba5af59ac90d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-qnfv2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2: exit status 1 (62.308634ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-qnfv2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-438836 describe pod metrics-server-6867b74b74-qnfv2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (447.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (372.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-081863 -n embed-certs-081863
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-17 18:48:47.413669958 +0000 UTC m=+6796.488242230
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-081863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-081863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.661µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-081863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-081863 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-081863 logs -n 25: (1.324347382s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:47 UTC |
	| start   | -p newest-cni-089562 --memory=2200 --alsologtostderr   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:48 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:47 UTC | 17 Sep 24 18:47 UTC |
	| addons  | enable metrics-server -p newest-cni-089562             | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-089562                                   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-089562                  | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC | 17 Sep 24 18:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-089562 --memory=2200 --alsologtostderr   | newest-cni-089562            | jenkins | v1.34.0 | 17 Sep 24 18:48 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:48:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:48:29.991251   84998 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:48:29.991383   84998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:48:29.991394   84998 out.go:358] Setting ErrFile to fd 2...
	I0917 18:48:29.991399   84998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:48:29.991634   84998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:48:29.992448   84998 out.go:352] Setting JSON to false
	I0917 18:48:29.993539   84998 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9025,"bootTime":1726589885,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:48:29.993653   84998 start.go:139] virtualization: kvm guest
	I0917 18:48:29.995997   84998 out.go:177] * [newest-cni-089562] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:48:29.997691   84998 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:48:29.997701   84998 notify.go:220] Checking for updates...
	I0917 18:48:30.000364   84998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:48:30.002173   84998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:48:30.003677   84998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:48:30.004973   84998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:48:30.006482   84998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:48:30.008149   84998 config.go:182] Loaded profile config "newest-cni-089562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:48:30.008615   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.008694   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.025310   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0917 18:48:30.025881   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.026448   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.026468   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.026835   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.027021   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.027369   84998 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:48:30.027691   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.027725   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.044172   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0917 18:48:30.044583   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.045051   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.045073   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.045422   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.045819   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.082856   84998 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:48:30.084207   84998 start.go:297] selected driver: kvm2
	I0917 18:48:30.084224   84998 start.go:901] validating driver "kvm2" against &{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:48:30.084359   84998 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:48:30.085100   84998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:48:30.085186   84998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:48:30.101359   84998 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:48:30.101806   84998 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 18:48:30.101840   84998 cni.go:84] Creating CNI manager for ""
	I0917 18:48:30.101882   84998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:48:30.101924   84998 start.go:340] cluster config:
	{Name:newest-cni-089562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-089562 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:48:30.102022   84998 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:48:30.104131   84998 out.go:177] * Starting "newest-cni-089562" primary control-plane node in "newest-cni-089562" cluster
	I0917 18:48:30.105728   84998 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:48:30.105779   84998 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 18:48:30.105788   84998 cache.go:56] Caching tarball of preloaded images
	I0917 18:48:30.105869   84998 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:48:30.105881   84998 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 18:48:30.105993   84998 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/newest-cni-089562/config.json ...
	I0917 18:48:30.106207   84998 start.go:360] acquireMachinesLock for newest-cni-089562: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:48:30.106279   84998 start.go:364] duration metric: took 48.55µs to acquireMachinesLock for "newest-cni-089562"
	I0917 18:48:30.106301   84998 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:48:30.106308   84998 fix.go:54] fixHost starting: 
	I0917 18:48:30.106614   84998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:48:30.106649   84998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:48:30.122861   84998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0917 18:48:30.123284   84998 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:48:30.123874   84998 main.go:141] libmachine: Using API Version  1
	I0917 18:48:30.123913   84998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:48:30.124220   84998 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:48:30.124400   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	I0917 18:48:30.124565   84998 main.go:141] libmachine: (newest-cni-089562) Calling .GetState
	I0917 18:48:30.126172   84998 fix.go:112] recreateIfNeeded on newest-cni-089562: state=Stopped err=<nil>
	I0917 18:48:30.126198   84998 main.go:141] libmachine: (newest-cni-089562) Calling .DriverName
	W0917 18:48:30.126354   84998 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:48:30.128491   84998 out.go:177] * Restarting existing kvm2 VM for "newest-cni-089562" ...
	I0917 18:48:30.129817   84998 main.go:141] libmachine: (newest-cni-089562) Calling .Start
	I0917 18:48:30.130007   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring networks are active...
	I0917 18:48:30.130919   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring network default is active
	I0917 18:48:30.131456   84998 main.go:141] libmachine: (newest-cni-089562) Ensuring network mk-newest-cni-089562 is active
	I0917 18:48:30.131963   84998 main.go:141] libmachine: (newest-cni-089562) Getting domain xml...
	I0917 18:48:30.132948   84998 main.go:141] libmachine: (newest-cni-089562) Creating domain...
	I0917 18:48:31.399517   84998 main.go:141] libmachine: (newest-cni-089562) Waiting to get IP...
	I0917 18:48:31.400318   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:31.400846   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:31.400917   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:31.400814   85033 retry.go:31] will retry after 303.138391ms: waiting for machine to come up
	I0917 18:48:31.705472   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:31.706000   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:31.706027   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:31.705950   85033 retry.go:31] will retry after 357.628795ms: waiting for machine to come up
	I0917 18:48:32.065687   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:32.066177   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:32.066201   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:32.066126   85033 retry.go:31] will retry after 468.730442ms: waiting for machine to come up
	I0917 18:48:32.536718   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:32.537149   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:32.537178   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:32.537120   85033 retry.go:31] will retry after 492.831284ms: waiting for machine to come up
	I0917 18:48:33.031366   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:33.031851   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:33.031875   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:33.031804   85033 retry.go:31] will retry after 645.10896ms: waiting for machine to come up
	I0917 18:48:33.678340   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:33.678872   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:33.678894   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:33.678823   85033 retry.go:31] will retry after 604.007171ms: waiting for machine to come up
	I0917 18:48:34.284798   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:34.285359   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:34.285382   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:34.285304   85033 retry.go:31] will retry after 971.834239ms: waiting for machine to come up
	I0917 18:48:35.258438   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:35.258979   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:35.259009   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:35.258934   85033 retry.go:31] will retry after 1.181531642s: waiting for machine to come up
	I0917 18:48:36.441781   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:36.442278   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:36.442346   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:36.442262   85033 retry.go:31] will retry after 1.847919149s: waiting for machine to come up
	I0917 18:48:38.291608   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:38.292024   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:38.292052   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:38.291985   85033 retry.go:31] will retry after 1.793447562s: waiting for machine to come up
	I0917 18:48:40.087612   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:40.088155   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:40.088183   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:40.088099   85033 retry.go:31] will retry after 1.927111525s: waiting for machine to come up
	I0917 18:48:42.016952   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:42.017388   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:42.017415   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:42.017340   85033 retry.go:31] will retry after 2.854260337s: waiting for machine to come up
	I0917 18:48:44.874372   84998 main.go:141] libmachine: (newest-cni-089562) DBG | domain newest-cni-089562 has defined MAC address 52:54:00:0f:be:d2 in network mk-newest-cni-089562
	I0917 18:48:44.874914   84998 main.go:141] libmachine: (newest-cni-089562) DBG | unable to find current IP address of domain newest-cni-089562 in network mk-newest-cni-089562
	I0917 18:48:44.874960   84998 main.go:141] libmachine: (newest-cni-089562) DBG | I0917 18:48:44.874873   85033 retry.go:31] will retry after 3.040881153s: waiting for machine to come up
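
	Note: the "will retry after ..." lines above come from libmachine polling libvirt for the restarted domain's DHCP lease until an IP address appears, with a growing, jittered delay between attempts. Below is a minimal Go sketch of that wait pattern for illustration only; waitForIP and lookupIP are hypothetical stand-ins, not minikube functions, and the delays/IP are taken loosely from the log above.

	// Illustrative only: a retry loop with growing, jittered delays, of the kind
	// the libmachine log above shows while waiting for the VM to obtain an IP.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query the real driver performs;
	// here it simply fails a few times before "finding" an address (hypothetical).
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.61.47", nil
	}

	// waitForIP polls lookupIP until it succeeds or the deadline expires, growing
	// the delay and adding jitter, mirroring the "will retry after ..." log lines.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the delay between attempts
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("machine is up at", ip)
	}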
	
	
	==> CRI-O <==
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.036330821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598928036307685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac932afb-b095-46bd-a9ff-61de6a924390 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.036953113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e456d163-9b6a-4d71-998c-033b2027928e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.037060734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e456d163-9b6a-4d71-998c-033b2027928e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.037284502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e456d163-9b6a-4d71-998c-033b2027928e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.074728656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8f58b0f-7a33-4073-814d-930cbed89cd2 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.074813670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8f58b0f-7a33-4073-814d-930cbed89cd2 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.076250472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0aeb6c94-0ff6-465b-b8eb-1f8069bdcdc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.076919864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598928076892531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0aeb6c94-0ff6-465b-b8eb-1f8069bdcdc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.077681325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97278af2-88e8-4dd5-b2e6-09d31172d4ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.077739314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97278af2-88e8-4dd5-b2e6-09d31172d4ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.077931135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97278af2-88e8-4dd5-b2e6-09d31172d4ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.120800868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c8d1d5a-1cea-4c9b-93e2-4d788ce30d4d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.120898466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c8d1d5a-1cea-4c9b-93e2-4d788ce30d4d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.122446143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5968ad08-18cc-4524-8ee6-4cbc4e0d4726 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.123457567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598928123430656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5968ad08-18cc-4524-8ee6-4cbc4e0d4726 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.124311777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c5360ce-5a20-4575-bda9-d69e8b48788a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.124397011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c5360ce-5a20-4575-bda9-d69e8b48788a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.124738106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c5360ce-5a20-4575-bda9-d69e8b48788a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.160612587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=746b0a00-2bf6-4f74-a1f5-8d3a3a8ab311 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.160696150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=746b0a00-2bf6-4f74-a1f5-8d3a3a8ab311 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.162201067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20d93bad-2bcf-4b45-8609-951a923c5911 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.162842836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598928162813723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20d93bad-2bcf-4b45-8609-951a923c5911 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.163679814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4faa6c3-a082-4c6b-a876-c8e84cf526b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.163740848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4faa6c3-a082-4c6b-a876-c8e84cf526b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:48:48 embed-certs-081863 crio[712]: time="2024-09-17 18:48:48.163952450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af,PodSandboxId:65b6042b431eaafc929cdd98b54f6fad68c8a5390cfceed3b82bc91f2c321c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007921318657,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-662sf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424,PodSandboxId:ff5d150cca77926ac03fc941ef94c090c41b11bac1847d8744a77cad58301148,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726598007920264982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dxjr7,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 16ebe197-5fcf-4988-968b-c9edd71886ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3,PodSandboxId:a770382b9e3102837ee6c972c554af3bb606e19484480ae2fd4aefa8a6624b12,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1726598007680141278,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107868ba-cf29-42b0-bb0d-c0da9b6b4c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a,PodSandboxId:69994d5984084ef0727a158f891504afa1d80d54defc1030a63eae732214a8eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726598007635029542,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7w64h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea,PodSandboxId:e6ac0bad7080625b8dc6d41d6e220900fd3b1e4dc4148c7444c052a8b97f3acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726597996390421210
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be47b4099e325b3b1c4c87cd0c31bee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68,PodSandboxId:ab9a0859267cf57589906fe1cc1b097491ba4662611b1642d37e5faed82ed6cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726597996382
248806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4405647265d111ad2dc00b43ba5fdd68,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c,PodSandboxId:9b15f72b553f9954485a3e190436c4287181cd99fb5d57c530a2c1938ae21091,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726597996385426819,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db,PodSandboxId:7f79282b9b1b55422486de0f779a3323c024aa4802224b31dd4a4cf356d99c76,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726597996239938189,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4243d94cac9cd89eef8782f3a6d2858f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599,PodSandboxId:8a2ddcc7f3ffa6df7580e485781e8b1e65358919e22d1c3ea2265bec8706e241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726597709488234048,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-081863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e77089a25f90888de8661fff9420990,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4faa6c3-a082-4c6b-a876-c8e84cf526b6 name=/runtime.v1.RuntimeService/ListContainers
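
	Note: the CRI-O excerpt above is the runtime's debug log of the standard CRI polling cycle (Version, ImageFsInfo, then ListContainers with an empty filter, which "returns full container list"). Below is a minimal Go sketch of issuing the same Version and ListContainers RPCs with the k8s.io/cri-api v1 client, assuming the crio socket path reported in the node annotations later in this section; it is illustrative only, not kubelet or minikube code.

	// Illustrative sketch: the CRI calls logged above, issued directly against CRI-O's gRPC socket.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path taken from the kubeadm cri-socket annotation shown below.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Equivalent of the /runtime.v1.RuntimeService/Version requests in the log.
		ver, err := client.Version(ctx, &runtimev1.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

		// Equivalent of the /runtime.v1.RuntimeService/ListContainers requests:
		// an empty filter returns the full container list, as the log notes.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{
			Filter: &runtimev1.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// IDs truncated to 13 characters, matching the container status table below.
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}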
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2fd2982baed9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   65b6042b431ea       coredns-7c65d6cfc9-662sf
	8dfd4aa286a96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   ff5d150cca779       coredns-7c65d6cfc9-dxjr7
	1aa96d2aea6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   a770382b9e310       storage-provisioner
	0a62d26a92dc9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   69994d5984084       kube-proxy-7w64h
	b0345045d68d0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   e6ac0bad70806       kube-controller-manager-embed-certs-081863
	e4757efa9abab       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   9b15f72b553f9       kube-apiserver-embed-certs-081863
	0cd490a48dfb6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   ab9a0859267cf       kube-scheduler-embed-certs-081863
	fbb0b6d28b2ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   7f79282b9b1b5       etcd-embed-certs-081863
	19bcc0b5d3726       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   8a2ddcc7f3ffa       kube-apiserver-embed-certs-081863
	
	
	==> coredns [8dfd4aa286a96cc38bc161378523546816375a97e3975bfdb9ea096e29560424] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c2fd2982baed92b765f1cd00e438c595150bfc1d7db6f00fbd13f1c4301be0af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-081863
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-081863
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=embed-certs-081863
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 18:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-081863
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 18:48:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 18:43:43 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 18:43:43 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 18:43:43 +0000   Tue, 17 Sep 2024 18:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 18:43:43 +0000   Tue, 17 Sep 2024 18:33:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.61
	  Hostname:    embed-certs-081863
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 484b610558c74ad0a08f35c832966507
	  System UUID:                484b6105-58c7-4ad0-a08f-35c832966507
	  Boot ID:                    f49a0f38-8397-4d05-9ae1-35d932263375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-662sf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-dxjr7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-081863                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-081863             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-081863    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7w64h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-081863             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-98t8z               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-081863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-081863 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-081863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-081863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-081863 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-081863 event: Registered Node embed-certs-081863 in Controller
	
	
	==> dmesg <==
	[  +0.044771] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.253306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702251] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.700482] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.165682] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.064829] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059806] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.195482] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.155993] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.300584] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.332498] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.070969] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.934659] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +4.592666] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.329192] kauditd_printk_skb: 54 callbacks suppressed
	[Sep17 18:29] kauditd_printk_skb: 31 callbacks suppressed
	[Sep17 18:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.264527] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +5.190181] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.902582] systemd-fstab-generator[2869]: Ignoring "noauto" option for root device
	[  +4.410741] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +0.098701] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.689755] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [fbb0b6d28b2ad7146435b1aacb4a12e2234e63117af18165d784114330c1a1db] <==
	{"level":"info","ts":"2024-09-17T18:33:17.034664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T18:33:17.034700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 received MsgPreVoteResp from e29fd3db84bd8ae5 at term 1"}
	{"level":"info","ts":"2024-09-17T18:33:17.034740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 received MsgVoteResp from e29fd3db84bd8ae5 at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e29fd3db84bd8ae5 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.034817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e29fd3db84bd8ae5 elected leader e29fd3db84bd8ae5 at term 2"}
	{"level":"info","ts":"2024-09-17T18:33:17.040725Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e29fd3db84bd8ae5","local-member-attributes":"{Name:embed-certs-081863 ClientURLs:[https://192.168.50.61:2379]}","request-path":"/0/members/e29fd3db84bd8ae5/attributes","cluster-id":"1b36a7ea249c729a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T18:33:17.040829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:33:17.041263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T18:33:17.042027Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:33:17.050539Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T18:33:17.050609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T18:33:17.046776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T18:33:17.055203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T18:33:17.048570Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.054292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.61:2379"}
	{"level":"info","ts":"2024-09-17T18:33:17.121264Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b36a7ea249c729a","local-member-id":"e29fd3db84bd8ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.124582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:33:17.124771Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T18:43:17.177390Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-09-17T18:43:17.187794Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":724,"took":"9.776798ms","hash":3154952064,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2433024,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-17T18:43:17.187898Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3154952064,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T18:48:17.187158Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-09-17T18:48:17.191350Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":967,"took":"3.662881ms","hash":2740427599,"current-db-size-bytes":2433024,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-17T18:48:17.191411Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2740427599,"revision":967,"compact-revision":724}
	
	
	==> kernel <==
	 18:48:48 up 20 min,  0 users,  load average: 0.19, 0.11, 0.10
	Linux embed-certs-081863 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19bcc0b5d372609d466df179c233f539c54b17933e07226f60276eee65377599] <==
	W0917 18:33:09.563771       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.590705       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.594296       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.622191       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.635384       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.635663       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.726658       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.731067       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.744700       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.750304       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.750712       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.796053       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.931680       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.935417       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.945421       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:09.983301       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.045221       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.076110       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.076358       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.093360       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.159023       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.195005       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.207101       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.268783       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 18:33:10.347417       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e4757efa9abab5ce3aaeff14db8d8984f93281054c88e4c1507e7d5c2aeacf7c] <==
	I0917 18:44:20.010073       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:44:20.010107       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:46:20.010714       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:46:20.011123       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:46:20.011042       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:46:20.011198       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 18:46:20.012385       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:46:20.012535       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0917 18:48:19.011318       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:48:19.011592       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0917 18:48:20.013178       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:48:20.013256       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0917 18:48:20.013527       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 18:48:20.013771       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 18:48:20.014516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 18:48:20.015629       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b0345045d68d04ef66df723640923d62d2528e7d5f7bbbd152118ed20fbb0eea] <==
	E0917 18:43:26.100767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:43:26.620648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:43:43.970916       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-081863"
	E0917 18:43:56.107990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:43:56.632139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:44:26.116848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:26.649553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 18:44:34.001739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="312.707µs"
	I0917 18:44:46.001557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="129.308µs"
	E0917 18:44:56.123566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:44:56.658660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:26.131616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:26.669061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:45:56.138549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:45:56.677963       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:26.145774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:26.693086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:46:56.151930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:46:56.701781       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:47:26.162253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:47:26.712715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:47:56.171020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:47:56.723190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0917 18:48:26.178989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 18:48:26.742340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0a62d26a92dc9106cd1c098db2c8d60ffed160222c87217dd5e6d2716a35030a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 18:33:28.454705       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 18:33:28.465201       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.61"]
	E0917 18:33:28.465295       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 18:33:28.504354       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 18:33:28.504405       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 18:33:28.504429       1 server_linux.go:169] "Using iptables Proxier"
	I0917 18:33:28.507407       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 18:33:28.508081       1 server.go:483] "Version info" version="v1.31.1"
	I0917 18:33:28.508112       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 18:33:28.512761       1 config.go:199] "Starting service config controller"
	I0917 18:33:28.512861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 18:33:28.512918       1 config.go:105] "Starting endpoint slice config controller"
	I0917 18:33:28.512939       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 18:33:28.514416       1 config.go:328] "Starting node config controller"
	I0917 18:33:28.514453       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 18:33:28.613106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 18:33:28.613234       1 shared_informer.go:320] Caches are synced for service config
	I0917 18:33:28.614642       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0cd490a48dfb6addeeac86af9eb870abbd26bbcb4f9d71fc988e0d6595ae7a68] <==
	W0917 18:33:19.846693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.846747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.851709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 18:33:19.851760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.856865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.856913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.899195       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 18:33:19.899331       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 18:33:19.919278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 18:33:19.919346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.946143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 18:33:19.946213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.964981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:19.965040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:19.981516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 18:33:19.981568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.003895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 18:33:20.003950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.126928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 18:33:20.127004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.156232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 18:33:20.156290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 18:33:20.394451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 18:33:20.394556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 18:33:22.603215       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 18:47:42 embed-certs-081863 kubelet[2876]: E0917 18:47:42.238820    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598862237695834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:42 embed-certs-081863 kubelet[2876]: E0917 18:47:42.239349    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598862237695834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:46 embed-certs-081863 kubelet[2876]: E0917 18:47:46.981192    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:47:52 embed-certs-081863 kubelet[2876]: E0917 18:47:52.241700    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598872241133433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:52 embed-certs-081863 kubelet[2876]: E0917 18:47:52.242138    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598872241133433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:47:58 embed-certs-081863 kubelet[2876]: E0917 18:47:58.980775    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:48:02 embed-certs-081863 kubelet[2876]: E0917 18:48:02.244884    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598882244200139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:02 embed-certs-081863 kubelet[2876]: E0917 18:48:02.245319    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598882244200139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:10 embed-certs-081863 kubelet[2876]: E0917 18:48:10.981721    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:48:12 embed-certs-081863 kubelet[2876]: E0917 18:48:12.247307    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598892246748563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:12 embed-certs-081863 kubelet[2876]: E0917 18:48:12.247812    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598892246748563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:21 embed-certs-081863 kubelet[2876]: E0917 18:48:21.981214    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]: E0917 18:48:22.031518    2876 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]: E0917 18:48:22.249549    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598902249079038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:22 embed-certs-081863 kubelet[2876]: E0917 18:48:22.249580    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598902249079038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:32 embed-certs-081863 kubelet[2876]: E0917 18:48:32.252267    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598912251448137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:32 embed-certs-081863 kubelet[2876]: E0917 18:48:32.252745    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598912251448137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:33 embed-certs-081863 kubelet[2876]: E0917 18:48:33.981182    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	Sep 17 18:48:42 embed-certs-081863 kubelet[2876]: E0917 18:48:42.255407    2876 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598922254988117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:42 embed-certs-081863 kubelet[2876]: E0917 18:48:42.255911    2876 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598922254988117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 18:48:47 embed-certs-081863 kubelet[2876]: E0917 18:48:47.983006    2876 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-98t8z" podUID="941996a1-2109-4c06-88d1-19c6987f81bf"
	
	
	==> storage-provisioner [1aa96d2aea6e07cacabc5f4fac23da55198ad1d1d74bd2c8ad9cb041b9062ed3] <==
	I0917 18:33:28.304878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 18:33:28.340297       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 18:33:28.342898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 18:33:28.360592       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 18:33:28.361238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313!
	I0917 18:33:28.367404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8436ff3f-2690-449c-9ae4-d4990227f65a", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313 became leader
	I0917 18:33:28.462631       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-081863_8b6cbfb3-cf47-4ca3-ac91-300ec6505313!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-081863 -n embed-certs-081863
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-081863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-98t8z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z: exit status 1 (68.215366ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-98t8z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-081863 describe pod metrics-server-6867b74b74-98t8z: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (372.18s)
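The captured logs above repeatedly show metrics-server-6867b74b74-98t8z stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, and the kube-apiserver reporting the v1beta1.metrics.k8s.io APIService as unavailable (503). A minimal follow-up sketch, not part of the captured run, assuming the embed-certs-081863 context is still reachable and the addon carries its usual k8s-app=metrics-server label:

	# check the pod the kubelet reports as ImagePullBackOff
	kubectl --context embed-certs-081863 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# inspect the aggregated APIService the apiserver logs show returning 503
	kubectl --context embed-certs-081863 get apiservice v1beta1.metrics.k8s.io -o yaml
	# confirm the image reference on the deployment (the fake.domain/... image seen above)
	kubectl --context embed-certs-081863 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'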

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (133.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.143:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.143:8443: connect: connection refused
	[previous warning repeated 131 times in total while the test polled the stopped apiserver; the following cert_rotation errors were interleaved with it]
E0917 18:46:21.889583   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:46:24.983175   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:46:44.544732   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:47:02.206422   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (239.454807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-190698" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-190698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-190698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.647µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-190698 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
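Once the apiserver on old-k8s-version-190698 is reachable again, the same assertions can be reproduced by hand; a minimal sketch (context and profile names taken from the log, and the jsonpath query is an assumed way to read the scraper image, not something the test itself runs):

	# is the VM up and the apiserver running?
	out/minikube-linux-amd64 status -p old-k8s-version-190698 --format='{{.Host}} {{.APIServer}}'
	# are the dashboard pods present?
	kubectl --context old-k8s-version-190698 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# which image did the scraper deployment actually get?
	kubectl --context old-k8s-version-190698 get deploy/dashboard-metrics-scraper -n kubernetes-dashboard \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
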
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (225.21717ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-190698 logs -n 25: (1.710546546s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo containerd config dump                            |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-639892                           | enable-default-cni-639892    | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-671774 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | disable-driver-mounts-671774                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:20 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-081863            | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC | 17 Sep 24 18:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-328741             | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC | 17 Sep 24 18:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:20 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-438836  | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC | 17 Sep 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:21 UTC |                     |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-081863                 | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-190698        | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-081863                                  | embed-certs-081863           | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:33 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-328741                  | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-328741                                   | no-preload-328741            | jenkins | v1.34.0 | 17 Sep 24 18:22 UTC | 17 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-438836       | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-438836 | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:32 UTC |
	|         | default-k8s-diff-port-438836                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-190698             | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC | 17 Sep 24 18:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-190698                              | old-k8s-version-190698       | jenkins | v1.34.0 | 17 Sep 24 18:23 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 18:23:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 18:23:50.674050   78008 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:23:50.674338   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674349   78008 out.go:358] Setting ErrFile to fd 2...
	I0917 18:23:50.674356   78008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:23:50.674556   78008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:23:50.675161   78008 out.go:352] Setting JSON to false
	I0917 18:23:50.676159   78008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7546,"bootTime":1726589885,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:23:50.676252   78008 start.go:139] virtualization: kvm guest
	I0917 18:23:50.678551   78008 out.go:177] * [old-k8s-version-190698] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:23:50.679898   78008 notify.go:220] Checking for updates...
	I0917 18:23:50.679923   78008 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:23:50.681520   78008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:23:50.683062   78008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:23:50.684494   78008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:23:50.685988   78008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:23:50.687372   78008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:23:50.689066   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:23:50.689526   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.689604   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.704879   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0917 18:23:50.705416   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.705985   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.706014   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.706318   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.706508   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.708560   78008 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 18:23:50.709804   78008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:23:50.710139   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:23:50.710185   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:23:50.725466   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0917 18:23:50.725978   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:23:50.726521   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:23:50.726552   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:23:50.726874   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:23:50.727047   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:23:50.764769   78008 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 18:23:50.766378   78008 start.go:297] selected driver: kvm2
	I0917 18:23:50.766396   78008 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.766522   78008 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:23:50.767254   78008 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:50.767323   78008 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 18:23:50.783226   78008 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 18:23:50.783619   78008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:23:50.783658   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:23:50.783697   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:23:50.783745   78008 start.go:340] cluster config:
	{Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:23:50.783859   78008 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 18:23:48.141429   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:50.786173   78008 out.go:177] * Starting "old-k8s-version-190698" primary control-plane node in "old-k8s-version-190698" cluster
	I0917 18:23:50.787985   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:23:50.788036   78008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 18:23:50.788046   78008 cache.go:56] Caching tarball of preloaded images
	I0917 18:23:50.788122   78008 preload.go:172] Found /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 18:23:50.788132   78008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0917 18:23:50.788236   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:23:50.788409   78008 start.go:360] acquireMachinesLock for old-k8s-version-190698: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:23:54.221530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:23:57.293515   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:03.373505   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:06.445563   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:12.525534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:15.597572   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:21.677533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:24.749529   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:30.829519   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:33.901554   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:39.981533   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:43.053468   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:49.133556   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:52.205564   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:24:58.285562   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:01.357500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:07.437467   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:10.509559   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:16.589464   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:19.661586   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:25.741498   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:28.813506   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:34.893488   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:37.965553   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:44.045546   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:47.117526   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:53.197534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:25:56.269532   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:02.349528   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:05.421492   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:11.501470   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:14.573534   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:20.653500   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:23.725530   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:29.805601   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:32.877548   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:38.957496   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:42.029510   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:48.109547   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:51.181567   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:26:57.261480   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:00.333628   77264 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.61:22: connect: no route to host
	I0917 18:27:03.338059   77433 start.go:364] duration metric: took 4m21.061938866s to acquireMachinesLock for "no-preload-328741"
	I0917 18:27:03.338119   77433 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:03.338127   77433 fix.go:54] fixHost starting: 
	I0917 18:27:03.338580   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:03.338627   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:03.353917   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38843
	I0917 18:27:03.354383   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:03.354859   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:27:03.354881   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:03.355169   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:03.355331   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:03.355481   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:27:03.357141   77433 fix.go:112] recreateIfNeeded on no-preload-328741: state=Stopped err=<nil>
	I0917 18:27:03.357164   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	W0917 18:27:03.357305   77433 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:03.359125   77433 out.go:177] * Restarting existing kvm2 VM for "no-preload-328741" ...
	I0917 18:27:03.335549   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:03.335586   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.335955   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:27:03.335984   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:27:03.336183   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:27:03.337915   77264 machine.go:96] duration metric: took 4m37.417759423s to provisionDockerMachine
	I0917 18:27:03.337964   77264 fix.go:56] duration metric: took 4m37.441049892s for fixHost
	I0917 18:27:03.337973   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 4m37.441075799s
	W0917 18:27:03.337995   77264 start.go:714] error starting host: provision: host is not running
	W0917 18:27:03.338098   77264 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0917 18:27:03.338107   77264 start.go:729] Will try again in 5 seconds ...
	I0917 18:27:03.360504   77433 main.go:141] libmachine: (no-preload-328741) Calling .Start
	I0917 18:27:03.360723   77433 main.go:141] libmachine: (no-preload-328741) Ensuring networks are active...
	I0917 18:27:03.361552   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network default is active
	I0917 18:27:03.361892   77433 main.go:141] libmachine: (no-preload-328741) Ensuring network mk-no-preload-328741 is active
	I0917 18:27:03.362266   77433 main.go:141] libmachine: (no-preload-328741) Getting domain xml...
	I0917 18:27:03.362986   77433 main.go:141] libmachine: (no-preload-328741) Creating domain...
	I0917 18:27:04.605668   77433 main.go:141] libmachine: (no-preload-328741) Waiting to get IP...
	I0917 18:27:04.606667   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.607120   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.607206   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.607116   78688 retry.go:31] will retry after 233.634344ms: waiting for machine to come up
	I0917 18:27:04.842666   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:04.843211   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:04.843238   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:04.843149   78688 retry.go:31] will retry after 295.987515ms: waiting for machine to come up
	I0917 18:27:05.140821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.141150   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.141173   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.141121   78688 retry.go:31] will retry after 482.890276ms: waiting for machine to come up
	I0917 18:27:05.625952   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:05.626401   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:05.626461   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:05.626347   78688 retry.go:31] will retry after 554.515102ms: waiting for machine to come up
	I0917 18:27:06.182038   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.182421   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.182448   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.182375   78688 retry.go:31] will retry after 484.48355ms: waiting for machine to come up
	I0917 18:27:06.668366   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:06.668886   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:06.668917   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:06.668862   78688 retry.go:31] will retry after 821.433387ms: waiting for machine to come up
	I0917 18:27:08.338629   77264 start.go:360] acquireMachinesLock for embed-certs-081863: {Name:mk6402dbe89020208df2680dbd7b5623a7cfa6f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 18:27:07.491878   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:07.492313   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:07.492333   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:07.492274   78688 retry.go:31] will retry after 777.017059ms: waiting for machine to come up
	I0917 18:27:08.271320   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:08.271721   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:08.271748   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:08.271671   78688 retry.go:31] will retry after 1.033548419s: waiting for machine to come up
	I0917 18:27:09.307361   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:09.307889   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:09.307922   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:09.307826   78688 retry.go:31] will retry after 1.347955425s: waiting for machine to come up
	I0917 18:27:10.657426   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:10.657903   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:10.657927   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:10.657850   78688 retry.go:31] will retry after 1.52847221s: waiting for machine to come up
	I0917 18:27:12.188594   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:12.189069   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:12.189094   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:12.189031   78688 retry.go:31] will retry after 2.329019451s: waiting for machine to come up
	I0917 18:27:14.519240   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:14.519691   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:14.519718   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:14.519643   78688 retry.go:31] will retry after 2.547184893s: waiting for machine to come up
	I0917 18:27:17.068162   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:17.068621   77433 main.go:141] libmachine: (no-preload-328741) DBG | unable to find current IP address of domain no-preload-328741 in network mk-no-preload-328741
	I0917 18:27:17.068645   77433 main.go:141] libmachine: (no-preload-328741) DBG | I0917 18:27:17.068577   78688 retry.go:31] will retry after 3.042534231s: waiting for machine to come up
	I0917 18:27:21.442547   77819 start.go:364] duration metric: took 3m42.844200352s to acquireMachinesLock for "default-k8s-diff-port-438836"
	I0917 18:27:21.442612   77819 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:21.442620   77819 fix.go:54] fixHost starting: 
	I0917 18:27:21.443035   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:21.443089   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:21.462997   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0917 18:27:21.463468   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:21.464035   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:27:21.464056   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:21.464377   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:21.464546   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:21.464703   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:27:21.466460   77819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-438836: state=Stopped err=<nil>
	I0917 18:27:21.466502   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	W0917 18:27:21.466643   77819 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:21.468932   77819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-438836" ...
	I0917 18:27:20.113857   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114336   77433 main.go:141] libmachine: (no-preload-328741) Found IP for machine: 192.168.72.182
	I0917 18:27:20.114359   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has current primary IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.114364   77433 main.go:141] libmachine: (no-preload-328741) Reserving static IP address...
	I0917 18:27:20.114774   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.114792   77433 main.go:141] libmachine: (no-preload-328741) Reserved static IP address: 192.168.72.182
	I0917 18:27:20.114808   77433 main.go:141] libmachine: (no-preload-328741) DBG | skip adding static IP to network mk-no-preload-328741 - found existing host DHCP lease matching {name: "no-preload-328741", mac: "52:54:00:de:bd:6d", ip: "192.168.72.182"}
	I0917 18:27:20.114822   77433 main.go:141] libmachine: (no-preload-328741) DBG | Getting to WaitForSSH function...
	I0917 18:27:20.114831   77433 main.go:141] libmachine: (no-preload-328741) Waiting for SSH to be available...
	I0917 18:27:20.116945   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117224   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.117268   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.117371   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH client type: external
	I0917 18:27:20.117396   77433 main.go:141] libmachine: (no-preload-328741) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa (-rw-------)
	I0917 18:27:20.117427   77433 main.go:141] libmachine: (no-preload-328741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:20.117439   77433 main.go:141] libmachine: (no-preload-328741) DBG | About to run SSH command:
	I0917 18:27:20.117446   77433 main.go:141] libmachine: (no-preload-328741) DBG | exit 0
	I0917 18:27:20.241462   77433 main.go:141] libmachine: (no-preload-328741) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:20.241844   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetConfigRaw
	I0917 18:27:20.242520   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.245397   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.245786   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.245821   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.246121   77433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/config.json ...
	I0917 18:27:20.246346   77433 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:20.246367   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:20.246573   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.248978   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249318   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.249345   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.249489   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.249643   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.249911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.250048   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.250301   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.250317   77433 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:20.357778   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:20.357805   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358058   77433 buildroot.go:166] provisioning hostname "no-preload-328741"
	I0917 18:27:20.358083   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.358261   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.361057   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361463   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.361498   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.361617   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.361774   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.361948   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.362031   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.362157   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.362321   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.362337   77433 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-328741 && echo "no-preload-328741" | sudo tee /etc/hostname
	I0917 18:27:20.486928   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-328741
	
	I0917 18:27:20.486956   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.489814   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490212   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.490245   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.490451   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.490627   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.490846   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.491105   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.491327   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.491532   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.491553   77433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:20.607308   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:20.607336   77433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:20.607379   77433 buildroot.go:174] setting up certificates
	I0917 18:27:20.607394   77433 provision.go:84] configureAuth start
	I0917 18:27:20.607407   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetMachineName
	I0917 18:27:20.607708   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:20.610353   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610722   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.610751   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.610897   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.612874   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613160   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.613196   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.613366   77433 provision.go:143] copyHostCerts
	I0917 18:27:20.613425   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:20.613435   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:20.613508   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:20.613607   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:20.613614   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:20.613645   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:20.613706   77433 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:20.613713   77433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:20.613734   77433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:20.613789   77433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.no-preload-328741 san=[127.0.0.1 192.168.72.182 localhost minikube no-preload-328741]
	I0917 18:27:20.808567   77433 provision.go:177] copyRemoteCerts
	I0917 18:27:20.808634   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:20.808662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.811568   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.811927   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.811954   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.812154   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.812347   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.812503   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.812627   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:20.895825   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 18:27:20.922489   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 18:27:20.948827   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:20.974824   77433 provision.go:87] duration metric: took 367.418792ms to configureAuth
	I0917 18:27:20.974852   77433 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:20.975023   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:20.975090   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:20.977758   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978068   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:20.978105   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:20.978254   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:20.978473   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978662   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:20.978784   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:20.978951   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:20.979110   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:20.979126   77433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:21.205095   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:21.205123   77433 machine.go:96] duration metric: took 958.76263ms to provisionDockerMachine
	I0917 18:27:21.205136   77433 start.go:293] postStartSetup for "no-preload-328741" (driver="kvm2")
	I0917 18:27:21.205148   77433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:21.205167   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.205532   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:21.205565   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.208451   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.208840   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.208882   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.209046   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.209355   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.209578   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.209759   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.291918   77433 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:21.296054   77433 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:21.296077   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:21.296139   77433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:21.296215   77433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:21.296313   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:21.305838   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:21.331220   77433 start.go:296] duration metric: took 126.069168ms for postStartSetup
	I0917 18:27:21.331261   77433 fix.go:56] duration metric: took 17.993134184s for fixHost
	I0917 18:27:21.331280   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.334290   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334663   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.334688   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.334893   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.335134   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335275   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.335443   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.335597   77433 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:21.335788   77433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0917 18:27:21.335803   77433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:21.442323   77433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597641.413351440
	
	I0917 18:27:21.442375   77433 fix.go:216] guest clock: 1726597641.413351440
	I0917 18:27:21.442390   77433 fix.go:229] Guest: 2024-09-17 18:27:21.41335144 +0000 UTC Remote: 2024-09-17 18:27:21.331264373 +0000 UTC m=+279.198911017 (delta=82.087067ms)
	I0917 18:27:21.442423   77433 fix.go:200] guest clock delta is within tolerance: 82.087067ms
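Annotation: the guest-clock check above runs `date +%s.%N` inside the VM, compares it with the host-side timestamp, and only resynchronizes when the delta exceeds a tolerance; here the delta is ~82ms and is accepted. A minimal sketch of that comparison, assuming a hypothetical one-second tolerance (the actual threshold is not shown in this log, and this is not minikube's code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no resync is needed. The tolerance value is an assumption
    // for illustration; the log only shows that ~82ms was accepted.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the fix.go:229 line above.
    	guest := time.Unix(0, 1726597641413351440).UTC()
    	host := time.Date(2024, 9, 17, 18, 27, 21, 331264373, time.UTC)

    	delta, ok := withinTolerance(guest, host, time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta≈82.087067ms within tolerance=true
    }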
	I0917 18:27:21.442443   77433 start.go:83] releasing machines lock for "no-preload-328741", held for 18.10434208s
	I0917 18:27:21.442489   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.442775   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:21.445223   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445561   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.445602   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.445710   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446182   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446357   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:27:21.446466   77433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:21.446519   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.446551   77433 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:21.446574   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:27:21.449063   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449340   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449400   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449435   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.449557   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.449699   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.449832   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:21.449833   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.449866   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:21.450010   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:27:21.450004   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.450104   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:27:21.450222   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:27:21.450352   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:27:21.552947   77433 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:21.559634   77433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:21.707720   77433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:21.714672   77433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:21.714746   77433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:21.731669   77433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
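Annotation: the step above moves conflicting bridge/podman CNI configs out of the way by renaming them with a .mk_disabled suffix, so only minikube's own CNI config is honored. A rough local equivalent of that remote find/mv pipeline, sketched in Go (the real operation runs over SSH via ssh_runner; this is illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs in dir by adding a
    // ".mk_disabled" suffix, mirroring the find/mv command in the log above.
    func disableConflictingCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println("disabled:", disabled) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
    }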
	I0917 18:27:21.731700   77433 start.go:495] detecting cgroup driver to use...
	I0917 18:27:21.731776   77433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:21.749370   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:21.765181   77433 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:21.765284   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:21.782356   77433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:21.801216   77433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:21.918587   77433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:22.089578   77433 docker.go:233] disabling docker service ...
	I0917 18:27:22.089661   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:22.110533   77433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:22.125372   77433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:22.241575   77433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:22.367081   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:22.381835   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:22.402356   77433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:22.402432   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.413980   77433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:22.414051   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.426845   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.439426   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.451352   77433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:22.463891   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.475686   77433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:22.495380   77433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
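Annotation: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands in this log, not copied from the file itself; other settings in the file are untouched):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]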
	I0917 18:27:22.507217   77433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:22.517776   77433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:22.517844   77433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:22.537889   77433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:22.549554   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:22.663258   77433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:22.762619   77433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:22.762693   77433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:22.769911   77433 start.go:563] Will wait 60s for crictl version
	I0917 18:27:22.769967   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:22.775014   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:22.819750   77433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:22.819864   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.849303   77433 ssh_runner.go:195] Run: crio --version
	I0917 18:27:22.887418   77433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
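Annotation: the runtime detection above shells out to crictl with a 60-second budget and reads the reported RuntimeName/RuntimeVersion. A minimal sketch of such a probe (command path as shown in the log; the parsing here is illustrative, not minikube's actual parser):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // crictlVersion runs "crictl version" with a deadline and returns the raw
    // key/value pairs it prints (e.g. RuntimeName, RuntimeVersion).
    func crictlVersion(ctx context.Context) (map[string]string, error) {
    	out, err := exec.CommandContext(ctx, "sudo", "/usr/bin/crictl", "version").Output()
    	if err != nil {
    		return nil, err
    	}
    	fields := map[string]string{}
    	for _, line := range strings.Split(string(out), "\n") {
    		if k, v, ok := strings.Cut(line, ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	return fields, nil
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    	defer cancel()
    	fields, err := crictlVersion(ctx)
    	if err != nil {
    		fmt.Println("crictl version failed:", err)
    		return
    	}
    	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"]) // cri-o 1.29.1 in this run
    }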
	I0917 18:27:21.470362   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Start
	I0917 18:27:21.470570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring networks are active...
	I0917 18:27:21.471316   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network default is active
	I0917 18:27:21.471781   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Ensuring network mk-default-k8s-diff-port-438836 is active
	I0917 18:27:21.472151   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Getting domain xml...
	I0917 18:27:21.472856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Creating domain...
	I0917 18:27:22.744436   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting to get IP...
	I0917 18:27:22.745314   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745829   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.745899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.745819   78807 retry.go:31] will retry after 201.903728ms: waiting for machine to come up
	I0917 18:27:22.949838   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:22.951596   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:22.951537   78807 retry.go:31] will retry after 376.852856ms: waiting for machine to come up
	I0917 18:27:23.330165   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330685   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.330706   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.330633   78807 retry.go:31] will retry after 415.874344ms: waiting for machine to come up
	I0917 18:27:22.888728   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetIP
	I0917 18:27:22.891793   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892111   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:27:22.892130   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:27:22.892513   77433 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:22.897071   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:22.911118   77433 kubeadm.go:883] updating cluster {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:22.911279   77433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:22.911333   77433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:22.949155   77433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:22.949180   77433 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:27:22.949270   77433 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.949289   77433 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:22.949319   77433 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 18:27:22.949298   77433 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.949398   77433 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.949424   77433 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.949449   77433 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.949339   77433 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.950952   77433 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:22.951106   77433 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:22.951113   77433 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:22.951238   77433 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 18:27:22.951257   77433 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:22.951343   77433 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:22.951426   77433 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.145473   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.155577   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.167187   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.169154   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.171736   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.196199   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.225029   77433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0917 18:27:23.225085   77433 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.225133   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.233185   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0917 18:27:23.269008   77433 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0917 18:27:23.269045   77433 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.269092   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.307273   77433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0917 18:27:23.307319   77433 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.307374   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.345906   77433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0917 18:27:23.345949   77433 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.345999   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.346222   77433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0917 18:27:23.346259   77433 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.346316   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.362612   77433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0917 18:27:23.362657   77433 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.362684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.362707   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.464589   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.464684   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.464742   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.464815   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.464903   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.464911   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616289   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.616349   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.616400   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.616459   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.616514   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.616548   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 18:27:23.752643   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0917 18:27:23.752754   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0917 18:27:23.761857   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0917 18:27:23.761945   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 18:27:23.762041   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.768641   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 18:27:23.883181   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0917 18:27:23.883230   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 18:27:23.883294   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:23.883301   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:23.883302   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:23.883314   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 18:27:23.883371   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0917 18:27:23.883388   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883401   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:23.883413   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0917 18:27:23.883680   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 18:27:23.883758   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:23.894354   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0917 18:27:23.894539   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0917 18:27:23.901735   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0917 18:27:23.901990   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0917 18:27:23.909116   77433 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.566575076s)
	I0917 18:27:26.450405   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0917 18:27:26.450360   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.566921389s)
	I0917 18:27:26.450422   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0917 18:27:26.450429   77433 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.541282746s)
	I0917 18:27:26.450444   77433 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450492   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0917 18:27:26.450485   77433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0917 18:27:26.450524   77433 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:26.450567   77433 ssh_runner.go:195] Run: which crictl
	I0917 18:27:23.748331   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748832   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:23.748862   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:23.748765   78807 retry.go:31] will retry after 515.370863ms: waiting for machine to come up
	I0917 18:27:24.265477   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265902   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.265939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.265859   78807 retry.go:31] will retry after 629.410487ms: waiting for machine to come up
	I0917 18:27:24.896939   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:24.897500   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:24.897415   78807 retry.go:31] will retry after 846.873676ms: waiting for machine to come up
	I0917 18:27:25.745594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746228   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:25.746254   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:25.746167   78807 retry.go:31] will retry after 1.192058073s: waiting for machine to come up
	I0917 18:27:26.940216   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940678   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:26.940702   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:26.940637   78807 retry.go:31] will retry after 1.449067435s: waiting for machine to come up
	I0917 18:27:28.392247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392711   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:28.392753   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:28.392665   78807 retry.go:31] will retry after 1.444723582s: waiting for machine to come up
	I0917 18:27:29.730898   77433 ssh_runner.go:235] Completed: which crictl: (3.280308944s)
	I0917 18:27:29.730988   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:29.731032   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.280407278s)
	I0917 18:27:29.731069   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0917 18:27:29.731121   77433 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.731164   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0917 18:27:29.781214   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016162   77433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.234900005s)
	I0917 18:27:32.016246   77433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:27:32.016175   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.284993422s)
	I0917 18:27:32.016331   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0917 18:27:32.016382   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.016431   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0917 18:27:32.062774   77433 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 18:27:32.062903   77433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:29.839565   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840118   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:29.840154   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:29.840044   78807 retry.go:31] will retry after 1.984255207s: waiting for machine to come up
	I0917 18:27:31.825642   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826059   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:31.826105   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:31.826027   78807 retry.go:31] will retry after 1.870760766s: waiting for machine to come up
	I0917 18:27:34.201435   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.18496735s)
	I0917 18:27:34.201470   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0917 18:27:34.201493   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:34.201506   77433 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.138578181s)
	I0917 18:27:34.201545   77433 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0917 18:27:34.201547   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0917 18:27:36.281470   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079903331s)
	I0917 18:27:36.281515   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0917 18:27:36.281539   77433 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0917 18:27:36.281581   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
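Annotation: the cache_images flow above follows the same pattern for each required image: inspect it with podman, and if it is missing (or does not match the expected digest), remove the stale tag with crictl and podman-load the tarball that was copied into /var/lib/minikube/images. A condensed sketch of that loop (paths and image names taken from the log; the real check compares image IDs against expected digests, and error handling is simplified here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path"
    	"strings"
    )

    // ensureImage loads one cached image tarball into CRI-O's storage via podman,
    // mirroring the inspect / rmi / load sequence in the log above.
    func ensureImage(image, tarball string) error {
    	// Is the image already present? (podman prints the ID if so.)
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) != "" {
    		return nil // already loaded (digest comparison omitted in this sketch)
    	}
    	// Drop any stale tag, then load the cached tarball.
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
    		return fmt.Errorf("podman load %s: %w", tarball, err)
    	}
    	return nil
    }

    func main() {
    	images := map[string]string{
    		"registry.k8s.io/kube-apiserver:v1.31.1": "kube-apiserver_v1.31.1",
    		"registry.k8s.io/etcd:3.5.15-0":          "etcd_3.5.15-0",
    	}
    	for image, file := range images {
    		if err := ensureImage(image, path.Join("/var/lib/minikube/images", file)); err != nil {
    			fmt.Println(err)
    		}
    	}
    }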
	I0917 18:27:33.698947   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699358   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:33.699389   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:33.699308   78807 retry.go:31] will retry after 2.194557575s: waiting for machine to come up
	I0917 18:27:35.896774   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897175   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | unable to find current IP address of domain default-k8s-diff-port-438836 in network mk-default-k8s-diff-port-438836
	I0917 18:27:35.897215   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | I0917 18:27:35.897139   78807 retry.go:31] will retry after 3.232409388s: waiting for machine to come up
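Annotation: the repeated "waiting for machine to come up" lines above are a polling loop: libmachine looks up the domain's DHCP lease and, while no IP has been assigned, sleeps for a growing interval and retries (201ms, 376ms, 415ms, ... in this run). A hedged sketch of such a loop, with the backoff policy invented for illustration rather than copied from minikube:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes.
    // The growing, jittered sleep mimics the retry intervals visible in the log;
    // the exact policy minikube uses is not reproduced here.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	wait := 200 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
    		time.Sleep(sleep)
    		wait = wait * 3 / 2 // grow the base interval
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, bool) {
    		calls++
    		if calls < 4 {
    			return "", false // DHCP lease not assigned yet
    		}
    		return "192.168.39.58", true
    	}, time.Minute)
    	fmt.Println(ip, err)
    }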
	I0917 18:27:40.422552   78008 start.go:364] duration metric: took 3m49.634084682s to acquireMachinesLock for "old-k8s-version-190698"
	I0917 18:27:40.422631   78008 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:27:40.422641   78008 fix.go:54] fixHost starting: 
	I0917 18:27:40.423075   78008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:27:40.423129   78008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:27:40.444791   78008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0917 18:27:40.445363   78008 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:27:40.446028   78008 main.go:141] libmachine: Using API Version  1
	I0917 18:27:40.446063   78008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:27:40.446445   78008 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:27:40.446690   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:40.446844   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetState
	I0917 18:27:40.448698   78008 fix.go:112] recreateIfNeeded on old-k8s-version-190698: state=Stopped err=<nil>
	I0917 18:27:40.448743   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	W0917 18:27:40.448912   78008 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:27:40.451316   78008 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-190698" ...
	I0917 18:27:40.452694   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .Start
	I0917 18:27:40.452899   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring networks are active...
	I0917 18:27:40.453913   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network default is active
	I0917 18:27:40.454353   78008 main.go:141] libmachine: (old-k8s-version-190698) Ensuring network mk-old-k8s-version-190698 is active
	I0917 18:27:40.454806   78008 main.go:141] libmachine: (old-k8s-version-190698) Getting domain xml...
	I0917 18:27:40.455606   78008 main.go:141] libmachine: (old-k8s-version-190698) Creating domain...
	I0917 18:27:39.131665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132199   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Found IP for machine: 192.168.39.58
	I0917 18:27:39.132224   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserving static IP address...
	I0917 18:27:39.132241   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has current primary IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.132683   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.132716   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | skip adding static IP to network mk-default-k8s-diff-port-438836 - found existing host DHCP lease matching {name: "default-k8s-diff-port-438836", mac: "52:54:00:78:fb:fd", ip: "192.168.39.58"}
	I0917 18:27:39.132729   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Reserved static IP address: 192.168.39.58
	I0917 18:27:39.132744   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Waiting for SSH to be available...
	I0917 18:27:39.132759   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Getting to WaitForSSH function...
	I0917 18:27:39.135223   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135590   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.135612   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.135797   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH client type: external
	I0917 18:27:39.135825   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa (-rw-------)
	I0917 18:27:39.135871   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:39.135888   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | About to run SSH command:
	I0917 18:27:39.135899   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | exit 0
	I0917 18:27:39.261644   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | SSH cmd err, output: <nil>: 
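Annotation: WaitForSSH above uses the external ssh binary with host-key checking disabled and simply runs `exit 0` until the command succeeds, at which point the VM is considered reachable. A rough sketch of that readiness probe (a subset of the flags shown in the log line above; the retry cadence and timeout are assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady returns nil once "ssh ... exit 0" succeeds against the guest.
    func sshReady(user, ip, keyPath string, timeout time.Duration) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes", "-i", keyPath,
    		fmt.Sprintf("%s@%s", user, ip), "exit 0",
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // polling interval chosen for illustration
    	}
    	return fmt.Errorf("ssh to %s not ready within %v", ip, timeout)
    }

    func main() {
    	err := sshReady("docker", "192.168.39.58",
    		"/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa",
    		2*time.Minute)
    	fmt.Println(err)
    }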
	I0917 18:27:39.261978   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetConfigRaw
	I0917 18:27:39.262594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.265005   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265308   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.265376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.265576   77819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/config.json ...
	I0917 18:27:39.265817   77819 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:39.265835   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:39.266039   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.268290   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268616   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.268646   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.268846   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.269019   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269159   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.269333   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.269497   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.269689   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.269701   77819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:39.378024   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:39.378050   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378284   77819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-438836"
	I0917 18:27:39.378322   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.378529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.381247   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.381614   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.381765   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.381938   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382057   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.382169   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.382311   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.382546   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.382567   77819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-438836 && echo "default-k8s-diff-port-438836" | sudo tee /etc/hostname
	I0917 18:27:39.516431   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-438836
	
	I0917 18:27:39.516462   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.519542   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.519934   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.519966   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.520172   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.520405   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520594   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.520773   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.520927   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.521094   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.521111   77819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-438836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-438836/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-438836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:39.640608   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:39.640656   77819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:39.640717   77819 buildroot.go:174] setting up certificates
	I0917 18:27:39.640731   77819 provision.go:84] configureAuth start
	I0917 18:27:39.640750   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetMachineName
	I0917 18:27:39.641038   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:39.643698   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644026   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.644085   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.644374   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.646822   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647198   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.647227   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.647360   77819 provision.go:143] copyHostCerts
	I0917 18:27:39.647428   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:39.647441   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:39.647516   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:39.647637   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:39.647658   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:39.647693   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:39.647782   77819 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:39.647790   77819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:39.647817   77819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:39.647883   77819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-438836 san=[127.0.0.1 192.168.39.58 default-k8s-diff-port-438836 localhost minikube]
	I0917 18:27:39.751962   77819 provision.go:177] copyRemoteCerts
	I0917 18:27:39.752028   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:39.752053   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.754975   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755348   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.755381   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.755541   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.755725   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.755872   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.755988   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:39.840071   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 18:27:39.866175   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:27:39.896353   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:39.924332   77819 provision.go:87] duration metric: took 283.582838ms to configureAuth
	I0917 18:27:39.924363   77819 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:27:39.924606   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:27:39.924701   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:39.927675   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928027   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:39.928058   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:39.928307   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:39.928545   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928710   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:39.928854   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:39.929011   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:39.929244   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:39.929272   77819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:27:40.170729   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:27:40.170763   77819 machine.go:96] duration metric: took 904.932975ms to provisionDockerMachine
	I0917 18:27:40.170776   77819 start.go:293] postStartSetup for "default-k8s-diff-port-438836" (driver="kvm2")
	I0917 18:27:40.170789   77819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:27:40.170810   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.171145   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:27:40.171187   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.173980   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174451   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.174480   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.174739   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.174926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.175096   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.175261   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.263764   77819 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:27:40.269500   77819 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:27:40.269528   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:27:40.269611   77819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:27:40.269711   77819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:27:40.269838   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:27:40.280672   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:40.309608   77819 start.go:296] duration metric: took 138.819033ms for postStartSetup
	I0917 18:27:40.309648   77819 fix.go:56] duration metric: took 18.867027995s for fixHost
	I0917 18:27:40.309668   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.312486   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313018   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.313042   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.313201   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.313408   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313574   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.313691   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.313853   77819 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:40.314037   77819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0917 18:27:40.314050   77819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:27:40.422393   77819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597660.391833807
	
	I0917 18:27:40.422417   77819 fix.go:216] guest clock: 1726597660.391833807
	I0917 18:27:40.422424   77819 fix.go:229] Guest: 2024-09-17 18:27:40.391833807 +0000 UTC Remote: 2024-09-17 18:27:40.309651352 +0000 UTC m=+241.856499140 (delta=82.182455ms)
	I0917 18:27:40.422443   77819 fix.go:200] guest clock delta is within tolerance: 82.182455ms
	I0917 18:27:40.422448   77819 start.go:83] releasing machines lock for "default-k8s-diff-port-438836", held for 18.97986821s
	I0917 18:27:40.422473   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.422745   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:40.425463   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.425856   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.425885   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.426048   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426529   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426665   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:27:40.426742   77819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:27:40.426807   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.426910   77819 ssh_runner.go:195] Run: cat /version.json
	I0917 18:27:40.426936   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:27:40.429570   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429639   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.429967   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430004   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:40.430031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430047   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:40.430161   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430297   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:27:40.430376   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430470   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:27:40.430662   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430664   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:27:40.430841   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.430837   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:27:40.532536   77819 ssh_runner.go:195] Run: systemctl --version
	I0917 18:27:40.540125   77819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:27:40.697991   77819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:27:40.705336   77819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:27:40.705427   77819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:27:40.723038   77819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:27:40.723065   77819 start.go:495] detecting cgroup driver to use...
	I0917 18:27:40.723135   77819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:27:40.745561   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:27:40.765884   77819 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:27:40.765955   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:27:40.786769   77819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:27:40.805655   77819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:27:40.935895   77819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:27:41.121556   77819 docker.go:233] disabling docker service ...
	I0917 18:27:41.121638   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:27:41.144711   77819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:27:41.164782   77819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:27:41.308439   77819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:27:41.467525   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:27:41.485989   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:27:41.510198   77819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:27:41.510282   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.526458   77819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:27:41.526566   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.543334   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.558978   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.574621   77819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:27:41.587226   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.603144   77819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.627410   77819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:27:41.639981   77819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:27:41.651547   77819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:27:41.651615   77819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:27:41.669534   77819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:27:41.684429   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:41.839270   77819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:27:41.974151   77819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:27:41.974230   77819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:27:41.980491   77819 start.go:563] Will wait 60s for crictl version
	I0917 18:27:41.980563   77819 ssh_runner.go:195] Run: which crictl
	I0917 18:27:41.985802   77819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:27:42.033141   77819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:27:42.033247   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.076192   77819 ssh_runner.go:195] Run: crio --version
	I0917 18:27:42.118442   77819 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:27:37.750960   77433 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.469353165s)
	I0917 18:27:37.750995   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0917 18:27:37.751021   77433 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:37.751074   77433 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0917 18:27:38.415240   77433 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 18:27:38.415308   77433 cache_images.go:123] Successfully loaded all cached images
	I0917 18:27:38.415317   77433 cache_images.go:92] duration metric: took 15.466122195s to LoadCachedImages
	I0917 18:27:38.415338   77433 kubeadm.go:934] updating node { 192.168.72.182 8443 v1.31.1 crio true true} ...
	I0917 18:27:38.415428   77433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:38.415536   77433 ssh_runner.go:195] Run: crio config
	I0917 18:27:38.466849   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:38.466880   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:38.466893   77433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:38.466921   77433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328741 NodeName:no-preload-328741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:38.467090   77433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-328741"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:38.467166   77433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:38.478263   77433 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:38.478345   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:38.488938   77433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:27:38.509613   77433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:38.529224   77433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0917 18:27:38.549010   77433 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:38.553381   77433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:38.566215   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:38.688671   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:38.708655   77433 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741 for IP: 192.168.72.182
	I0917 18:27:38.708677   77433 certs.go:194] generating shared ca certs ...
	I0917 18:27:38.708693   77433 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:38.708860   77433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:38.708916   77433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:38.708930   77433 certs.go:256] generating profile certs ...
	I0917 18:27:38.709038   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/client.key
	I0917 18:27:38.709130   77433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key.843ed40b
	I0917 18:27:38.709199   77433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key
	I0917 18:27:38.709384   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:38.709421   77433 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:38.709435   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:38.709471   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:38.709519   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:38.709552   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:38.709606   77433 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:38.710412   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:38.754736   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:38.792703   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:38.826420   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:38.869433   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 18:27:38.897601   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 18:27:38.928694   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:38.953856   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/no-preload-328741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:27:38.978643   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:39.004382   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:39.031548   77433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:39.057492   77433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:39.075095   77433 ssh_runner.go:195] Run: openssl version
	I0917 18:27:39.081033   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:39.092196   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097013   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.097070   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:39.103104   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:39.114377   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:39.125639   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130757   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.130828   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:39.137857   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:39.150215   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:39.161792   77433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166467   77433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.166528   77433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:39.172262   77433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:39.183793   77433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:39.188442   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:39.194477   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:39.200688   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:39.207092   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:39.213451   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:39.220286   77433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:39.226642   77433 kubeadm.go:392] StartCluster: {Name:no-preload-328741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-328741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:39.226747   77433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:39.226814   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.273929   77433 cri.go:89] found id: ""
	I0917 18:27:39.274001   77433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:39.286519   77433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:39.286543   77433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:39.286584   77433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:39.298955   77433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:39.300296   77433 kubeconfig.go:125] found "no-preload-328741" server: "https://192.168.72.182:8443"
	I0917 18:27:39.303500   77433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:39.316866   77433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.182
	I0917 18:27:39.316904   77433 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:39.316917   77433 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:39.316980   77433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:39.356519   77433 cri.go:89] found id: ""
	I0917 18:27:39.356608   77433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:39.373894   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:39.387121   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:39.387140   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:39.387183   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:27:39.397807   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:39.397867   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:39.408393   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:27:39.420103   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:39.420175   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:39.432123   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.442237   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:39.442308   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:39.452902   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:27:39.462802   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:39.462857   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:39.473035   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:39.483824   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:39.603594   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.540682   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.798278   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:40.876550   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:41.006410   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:41.006504   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:41.507355   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.006707   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:42.054395   77433 api_server.go:72] duration metric: took 1.047984188s to wait for apiserver process to appear ...
	I0917 18:27:42.054448   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:42.054473   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:42.054949   77433 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": dial tcp 192.168.72.182:8443: connect: connection refused
	I0917 18:27:42.119537   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetIP
	I0917 18:27:42.122908   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123378   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:27:42.123409   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:27:42.123739   77819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0917 18:27:42.129654   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:42.144892   77819 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:27:42.145015   77819 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:27:42.145054   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:42.191002   77819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:27:42.191086   77819 ssh_runner.go:195] Run: which lz4
	I0917 18:27:42.196979   77819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:27:42.203024   77819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:27:42.203079   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:27:41.874915   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting to get IP...
	I0917 18:27:41.875882   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:41.876350   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:41.876438   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:41.876337   78975 retry.go:31] will retry after 221.467702ms: waiting for machine to come up
	I0917 18:27:42.100196   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.100848   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.100869   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.100798   78975 retry.go:31] will retry after 339.25287ms: waiting for machine to come up
	I0917 18:27:42.441407   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.442029   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.442057   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.441987   78975 retry.go:31] will retry after 471.576193ms: waiting for machine to come up
	I0917 18:27:42.915529   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:42.916159   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:42.916187   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:42.916123   78975 retry.go:31] will retry after 502.97146ms: waiting for machine to come up
	I0917 18:27:43.420795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:43.421214   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:43.421256   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:43.421163   78975 retry.go:31] will retry after 660.138027ms: waiting for machine to come up
	I0917 18:27:44.082653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.083225   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.083255   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.083166   78975 retry.go:31] will retry after 656.142121ms: waiting for machine to come up
	I0917 18:27:44.740700   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:44.741167   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:44.741193   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:44.741129   78975 retry.go:31] will retry after 928.613341ms: waiting for machine to come up
	I0917 18:27:45.671934   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:45.672452   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:45.672489   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:45.672370   78975 retry.go:31] will retry after 980.051509ms: waiting for machine to come up
	I0917 18:27:42.554732   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.472618   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.472651   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.472667   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.491418   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:45.491447   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:45.554728   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:45.562047   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:45.562083   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.054709   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.077483   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.077533   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:46.555249   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:46.570200   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:46.570242   77433 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:47.054604   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:27:47.062637   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:27:47.074075   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:47.074107   77433 api_server.go:131] duration metric: took 5.019651057s to wait for apiserver health ...
	I0917 18:27:47.074118   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:27:47.074127   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:47.275236   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:43.762089   77819 crio.go:462] duration metric: took 1.565150626s to copy over tarball
	I0917 18:27:43.762183   77819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:27:46.222613   77819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460401071s)
	I0917 18:27:46.222640   77819 crio.go:469] duration metric: took 2.460522168s to extract the tarball
	I0917 18:27:46.222649   77819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:27:46.260257   77819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:27:46.314982   77819 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:27:46.315007   77819 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:27:46.315017   77819 kubeadm.go:934] updating node { 192.168.39.58 8444 v1.31.1 crio true true} ...
	I0917 18:27:46.315159   77819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-438836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:27:46.315267   77819 ssh_runner.go:195] Run: crio config
	I0917 18:27:46.372511   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:46.372534   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:46.372545   77819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:27:46.372564   77819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-438836 NodeName:default-k8s-diff-port-438836 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:27:46.372684   77819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-438836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:27:46.372742   77819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:27:46.383855   77819 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:27:46.383950   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:27:46.394588   77819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0917 18:27:46.416968   77819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:27:46.438389   77819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0917 18:27:46.461630   77819 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0917 18:27:46.467126   77819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:27:46.484625   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:27:46.614753   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:27:46.638959   77819 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836 for IP: 192.168.39.58
	I0917 18:27:46.638984   77819 certs.go:194] generating shared ca certs ...
	I0917 18:27:46.639004   77819 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:27:46.639166   77819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:27:46.639228   77819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:27:46.639240   77819 certs.go:256] generating profile certs ...
	I0917 18:27:46.639349   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/client.key
	I0917 18:27:46.639420   77819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key.06041009
	I0917 18:27:46.639484   77819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key
	I0917 18:27:46.639636   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:27:46.639695   77819 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:27:46.639708   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:27:46.639740   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:27:46.639773   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:27:46.639807   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:27:46.639904   77819 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:27:46.640789   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:27:46.681791   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:27:46.715575   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:27:46.746415   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:27:46.780380   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 18:27:46.805518   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:27:46.841727   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:27:46.881056   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/default-k8s-diff-port-438836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 18:27:46.918589   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:27:46.947113   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:27:46.977741   77819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:27:47.015143   77819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:27:47.036837   77819 ssh_runner.go:195] Run: openssl version
	I0917 18:27:47.043152   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:27:47.057503   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063479   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.063554   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:27:47.072746   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:27:47.090698   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:27:47.105125   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110617   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.110690   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:27:47.117267   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:27:47.131593   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:27:47.145726   77819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151245   77819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.151350   77819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:27:47.157996   77819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:27:47.171327   77819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:27:47.178058   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:27:47.185068   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:27:47.191776   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:27:47.198740   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:27:47.206057   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:27:47.212608   77819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:27:47.219345   77819 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-438836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-438836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:27:47.219459   77819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:27:47.219518   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.259853   77819 cri.go:89] found id: ""
	I0917 18:27:47.259944   77819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:27:47.271127   77819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:27:47.271146   77819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:27:47.271197   77819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:27:47.283724   77819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:27:47.284834   77819 kubeconfig.go:125] found "default-k8s-diff-port-438836" server: "https://192.168.39.58:8444"
	I0917 18:27:47.287040   77819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:27:47.298429   77819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.58
	I0917 18:27:47.298462   77819 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:27:47.298481   77819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:27:47.298535   77819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:27:47.341739   77819 cri.go:89] found id: ""
	I0917 18:27:47.341820   77819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:27:47.361539   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:27:47.377218   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:27:47.377254   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:27:47.377301   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:27:47.390846   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:27:47.390913   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:27:47.401363   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:27:47.411412   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:27:47.411490   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:27:47.422596   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.438021   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:27:47.438102   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:27:47.450085   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:27:47.461269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:27:47.461343   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:27:47.472893   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:27:47.484393   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:47.620947   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:46.654519   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:46.654962   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:46.655001   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:46.654927   78975 retry.go:31] will retry after 1.346541235s: waiting for machine to come up
	I0917 18:27:48.003569   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:48.004084   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:48.004118   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:48.004017   78975 retry.go:31] will retry after 2.098571627s: waiting for machine to come up
	I0917 18:27:50.105422   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:50.106073   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:50.106096   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:50.105998   78975 retry.go:31] will retry after 1.995584656s: waiting for machine to come up
	I0917 18:27:47.424559   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:47.441071   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:47.462954   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:47.636311   77433 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:47.636361   77433 system_pods.go:61] "coredns-7c65d6cfc9-cgmx9" [e539dfc7-82f3-4e3a-b4d8-262c528fa5bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:47.636373   77433 system_pods.go:61] "etcd-no-preload-328741" [16eed9ef-b991-4760-a116-af9716a70d71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:47.636388   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ed952dd4-6a99-4ad8-9cdb-c47a5f9d8e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:47.636397   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [5da59a8e-4ce3-41f0-a8a0-d022f8788ce1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:47.636407   77433 system_pods.go:61] "kube-proxy-kpzxv" [eae9f1b2-95bf-44bf-9752-92e34a863520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:47.636415   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [54c4a13c-e03c-4ccb-993b-7b454a66f266] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:47.636428   77433 system_pods.go:61] "metrics-server-6867b74b74-l8n57" [06210da2-3da4-4082-a966-7a808d762db9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:47.636434   77433 system_pods.go:61] "storage-provisioner" [c7501af5-63e1-499f-acfe-48c569e460dd] Running
	I0917 18:27:47.636445   77433 system_pods.go:74] duration metric: took 173.469578ms to wait for pod list to return data ...
	I0917 18:27:47.636458   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:47.642831   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:47.642863   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:47.642876   77433 node_conditions.go:105] duration metric: took 6.413638ms to run NodePressure ...
	I0917 18:27:47.642898   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.172338   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.529413888s)
	I0917 18:27:49.172374   77433 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181467   77433 kubeadm.go:739] kubelet initialised
	I0917 18:27:49.181492   77433 kubeadm.go:740] duration metric: took 9.106065ms waiting for restarted kubelet to initialise ...
	I0917 18:27:49.181504   77433 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:49.188444   77433 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:51.196629   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:48.837267   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216281013s)
	I0917 18:27:48.837303   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.079443   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.184248   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:49.270646   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:27:49.270739   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:49.771210   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.270888   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:27:50.300440   77819 api_server.go:72] duration metric: took 1.029792788s to wait for apiserver process to appear ...
	I0917 18:27:50.300472   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:27:50.300497   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:50.301150   77819 api_server.go:269] stopped: https://192.168.39.58:8444/healthz: Get "https://192.168.39.58:8444/healthz": dial tcp 192.168.39.58:8444: connect: connection refused
	I0917 18:27:50.800904   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.830413   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.830444   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:53.830466   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:53.863997   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:27:53.864040   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:27:54.301188   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.308708   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.308744   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:54.801293   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:54.810135   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:27:54.810165   77819 api_server.go:103] status: https://192.168.39.58:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:27:55.300669   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:27:55.306598   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:27:55.314062   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:27:55.314089   77819 api_server.go:131] duration metric: took 5.013610515s to wait for apiserver health ...
	I0917 18:27:55.314098   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:27:55.314105   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:27:55.315933   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:27:52.103970   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:52.104598   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:52.104668   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:52.104610   78975 retry.go:31] will retry after 3.302824s: waiting for machine to come up
	I0917 18:27:55.410506   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:55.410967   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | unable to find current IP address of domain old-k8s-version-190698 in network mk-old-k8s-version-190698
	I0917 18:27:55.410993   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | I0917 18:27:55.410917   78975 retry.go:31] will retry after 3.790367729s: waiting for machine to come up
	I0917 18:27:53.697650   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:56.195779   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:55.317026   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:27:55.328593   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:27:55.353710   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:27:55.364593   77819 system_pods.go:59] 8 kube-system pods found
	I0917 18:27:55.364637   77819 system_pods.go:61] "coredns-7c65d6cfc9-5wm4j" [af3267b8-4da2-4e95-802e-981814415f7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:27:55.364649   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [72235e11-dd9c-4560-a258-84ae2fefc0ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:27:55.364659   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [606ffa55-26de-426a-b101-3e5db2329146] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:27:55.364682   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [a9ef6aae-54f9-4ac7-959f-3fb9dcf6019d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:27:55.364694   77819 system_pods.go:61] "kube-proxy-pbjlc" [de4d4161-64cd-4794-9eaa-d42b1b13e4a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 18:27:55.364702   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [ba637ee3-77ca-4b12-8936-3e8616be80d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:27:55.364712   77819 system_pods.go:61] "metrics-server-6867b74b74-gpdsn" [4d3193f7-7912-40c6-b86e-402935023601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:27:55.364722   77819 system_pods.go:61] "storage-provisioner" [5dbf57a2-126c-46e2-9be5-eb2974b84720] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 18:27:55.364739   77819 system_pods.go:74] duration metric: took 10.995638ms to wait for pod list to return data ...
	I0917 18:27:55.364752   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:27:55.369115   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:27:55.369145   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:27:55.369159   77819 node_conditions.go:105] duration metric: took 4.401118ms to run NodePressure ...
	I0917 18:27:55.369179   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:27:55.688791   77819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694004   77819 kubeadm.go:739] kubelet initialised
	I0917 18:27:55.694035   77819 kubeadm.go:740] duration metric: took 5.21454ms waiting for restarted kubelet to initialise ...
	I0917 18:27:55.694045   77819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:27:55.700066   77819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.706889   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:00.566518   77264 start.go:364] duration metric: took 52.227841633s to acquireMachinesLock for "embed-certs-081863"
	I0917 18:28:00.566588   77264 start.go:96] Skipping create...Using existing machine configuration
	I0917 18:28:00.566596   77264 fix.go:54] fixHost starting: 
	I0917 18:28:00.567020   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:28:00.567055   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:28:00.585812   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
	I0917 18:28:00.586338   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:28:00.586855   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:28:00.586878   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:28:00.587201   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:28:00.587368   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:00.587552   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:28:00.589641   77264 fix.go:112] recreateIfNeeded on embed-certs-081863: state=Stopped err=<nil>
	I0917 18:28:00.589668   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	W0917 18:28:00.589827   77264 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 18:28:00.591622   77264 out.go:177] * Restarting existing kvm2 VM for "embed-certs-081863" ...
	I0917 18:27:59.203551   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204119   78008 main.go:141] libmachine: (old-k8s-version-190698) Found IP for machine: 192.168.61.143
	I0917 18:27:59.204145   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserving static IP address...
	I0917 18:27:59.204160   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has current primary IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.204580   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.204623   78008 main.go:141] libmachine: (old-k8s-version-190698) Reserved static IP address: 192.168.61.143
	I0917 18:27:59.204642   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | skip adding static IP to network mk-old-k8s-version-190698 - found existing host DHCP lease matching {name: "old-k8s-version-190698", mac: "52:54:00:72:8a:43", ip: "192.168.61.143"}
	I0917 18:27:59.204660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Getting to WaitForSSH function...
	I0917 18:27:59.204675   78008 main.go:141] libmachine: (old-k8s-version-190698) Waiting for SSH to be available...
	I0917 18:27:59.206831   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207248   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.207277   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.207563   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH client type: external
	I0917 18:27:59.207591   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa (-rw-------)
	I0917 18:27:59.207628   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:27:59.207648   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | About to run SSH command:
	I0917 18:27:59.207660   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | exit 0
	I0917 18:27:59.334284   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | SSH cmd err, output: <nil>: 
	I0917 18:27:59.334712   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetConfigRaw
	I0917 18:27:59.335400   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.337795   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338175   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.338199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.338448   78008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/config.json ...
	I0917 18:27:59.338675   78008 machine.go:93] provisionDockerMachine start ...
	I0917 18:27:59.338696   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:27:59.338932   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.340943   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341313   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.341338   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.341517   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.341695   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341821   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.341953   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.342138   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.342349   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.342366   78008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:27:59.449958   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:27:59.449986   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450245   78008 buildroot.go:166] provisioning hostname "old-k8s-version-190698"
	I0917 18:27:59.450275   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.450449   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.453653   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454015   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.454044   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.454246   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.454451   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454608   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.454777   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.454978   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.455195   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.455212   78008 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-190698 && echo "old-k8s-version-190698" | sudo tee /etc/hostname
	I0917 18:27:59.576721   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-190698
	
	I0917 18:27:59.576758   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.579821   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580176   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.580211   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.580420   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.580601   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580774   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.580920   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.581097   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:27:59.581292   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:27:59.581313   78008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-190698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-190698/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-190698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:27:59.696335   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 18:27:59.696366   78008 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:27:59.696387   78008 buildroot.go:174] setting up certificates
	I0917 18:27:59.696396   78008 provision.go:84] configureAuth start
	I0917 18:27:59.696405   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetMachineName
	I0917 18:27:59.696689   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:27:59.699694   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700052   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.700079   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.700251   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.702492   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.702870   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.702897   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.703098   78008 provision.go:143] copyHostCerts
	I0917 18:27:59.703211   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:27:59.703228   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:27:59.703308   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:27:59.703494   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:27:59.703511   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:27:59.703557   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:27:59.703696   78008 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:27:59.703711   78008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:27:59.703743   78008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:27:59.703843   78008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-190698 san=[127.0.0.1 192.168.61.143 localhost minikube old-k8s-version-190698]
	I0917 18:27:59.881199   78008 provision.go:177] copyRemoteCerts
	I0917 18:27:59.881281   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:27:59.881319   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:27:59.884199   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884526   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:27:59.884559   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:27:59.884808   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:27:59.885004   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:27:59.885174   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:27:59.885311   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:27:59.972021   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:27:59.999996   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0917 18:28:00.028759   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:00.062167   78008 provision.go:87] duration metric: took 365.752983ms to configureAuth
	I0917 18:28:00.062224   78008 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:00.062431   78008 config.go:182] Loaded profile config "old-k8s-version-190698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:28:00.062530   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.065903   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066354   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.066387   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.066851   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.067080   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067272   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.067551   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.067782   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.068031   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.068058   78008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:00.310378   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:00.310410   78008 machine.go:96] duration metric: took 971.72114ms to provisionDockerMachine
	I0917 18:28:00.310424   78008 start.go:293] postStartSetup for "old-k8s-version-190698" (driver="kvm2")
	I0917 18:28:00.310444   78008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:00.310465   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.310788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:00.310822   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.313609   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.313975   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.314004   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.314158   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.314364   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.314518   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.314672   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.402352   78008 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:00.407061   78008 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:00.407091   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:00.407183   78008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:00.407295   78008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:00.407435   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:00.419527   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:00.449686   78008 start.go:296] duration metric: took 139.247596ms for postStartSetup
	I0917 18:28:00.449739   78008 fix.go:56] duration metric: took 20.027097941s for fixHost
	I0917 18:28:00.449764   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.452672   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453033   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.453080   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.453218   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.453433   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453637   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.453793   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.454001   78008 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:00.454175   78008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.143 22 <nil> <nil>}
	I0917 18:28:00.454185   78008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:00.566377   78008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597680.523257617
	
	I0917 18:28:00.566403   78008 fix.go:216] guest clock: 1726597680.523257617
	I0917 18:28:00.566413   78008 fix.go:229] Guest: 2024-09-17 18:28:00.523257617 +0000 UTC Remote: 2024-09-17 18:28:00.449744487 +0000 UTC m=+249.811602656 (delta=73.51313ms)
	I0917 18:28:00.566439   78008 fix.go:200] guest clock delta is within tolerance: 73.51313ms
	I0917 18:28:00.566445   78008 start.go:83] releasing machines lock for "old-k8s-version-190698", held for 20.143843614s
	I0917 18:28:00.566478   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.566748   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:00.570065   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570491   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.570520   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.570731   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571320   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571497   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .DriverName
	I0917 18:28:00.571584   78008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:00.571649   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.571803   78008 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:00.571830   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHHostname
	I0917 18:28:00.574802   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575083   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575343   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575382   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575506   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.575574   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:00.575600   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:00.575664   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.575881   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.575941   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHPort
	I0917 18:28:00.576030   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.576082   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHKeyPath
	I0917 18:28:00.576278   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetSSHUsername
	I0917 18:28:00.576430   78008 sshutil.go:53] new ssh client: &{IP:192.168.61.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/old-k8s-version-190698/id_rsa Username:docker}
	I0917 18:28:00.592850   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Start
	I0917 18:28:00.593044   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring networks are active...
	I0917 18:28:00.593996   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network default is active
	I0917 18:28:00.594404   77264 main.go:141] libmachine: (embed-certs-081863) Ensuring network mk-embed-certs-081863 is active
	I0917 18:28:00.594855   77264 main.go:141] libmachine: (embed-certs-081863) Getting domain xml...
	I0917 18:28:00.595603   77264 main.go:141] libmachine: (embed-certs-081863) Creating domain...
	I0917 18:28:00.685146   78008 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:00.692059   78008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:00.844888   78008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:00.852326   78008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:00.852438   78008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:00.869907   78008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:00.869934   78008 start.go:495] detecting cgroup driver to use...
	I0917 18:28:00.870010   78008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:00.888992   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:00.905438   78008 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:00.905495   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:00.920872   78008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:00.939154   78008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:01.067061   78008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:01.220976   78008 docker.go:233] disabling docker service ...
	I0917 18:28:01.221068   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:01.240350   78008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:01.257396   78008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:01.407317   78008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:01.552256   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:01.567151   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:01.589401   78008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0917 18:28:01.589465   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.604462   78008 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:01.604527   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.617293   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.629766   78008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:01.643336   78008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:01.656308   78008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:01.667116   78008 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:01.667187   78008 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:01.683837   78008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 18:28:01.697438   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:01.843288   78008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:01.951590   78008 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:01.951666   78008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:01.957158   78008 start.go:563] Will wait 60s for crictl version
	I0917 18:28:01.957240   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:01.961218   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:02.001679   78008 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:02.001772   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.032619   78008 ssh_runner.go:195] Run: crio --version
	I0917 18:28:02.064108   78008 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0917 18:27:57.695202   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.695235   77433 pod_ready.go:82] duration metric: took 8.506750324s for pod "coredns-7c65d6cfc9-cgmx9" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.695249   77433 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700040   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.700062   77433 pod_ready.go:82] duration metric: took 4.804815ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.700070   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705836   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:57.705867   77433 pod_ready.go:82] duration metric: took 5.789446ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:57.705880   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215156   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.215180   77433 pod_ready.go:82] duration metric: took 509.29189ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.215193   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221031   77433 pod_ready.go:93] pod "kube-proxy-kpzxv" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.221054   77433 pod_ready.go:82] duration metric: took 5.853831ms for pod "kube-proxy-kpzxv" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.221065   77433 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493958   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:58.493984   77433 pod_ready.go:82] duration metric: took 272.911397ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:58.493994   77433 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:00.501591   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:27:59.707995   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace has status "Ready":"True"
	I0917 18:27:59.708017   77819 pod_ready.go:82] duration metric: took 4.007926053s for pod "coredns-7c65d6cfc9-5wm4j" in "kube-system" namespace to be "Ready" ...
	I0917 18:27:59.708026   77819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:01.716326   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:02.065336   78008 main.go:141] libmachine: (old-k8s-version-190698) Calling .GetIP
	I0917 18:28:02.068703   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069066   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:8a:43", ip: ""} in network mk-old-k8s-version-190698: {Iface:virbr3 ExpiryTime:2024-09-17 19:27:53 +0000 UTC Type:0 Mac:52:54:00:72:8a:43 Iaid: IPaddr:192.168.61.143 Prefix:24 Hostname:old-k8s-version-190698 Clientid:01:52:54:00:72:8a:43}
	I0917 18:28:02.069094   78008 main.go:141] libmachine: (old-k8s-version-190698) DBG | domain old-k8s-version-190698 has defined IP address 192.168.61.143 and MAC address 52:54:00:72:8a:43 in network mk-old-k8s-version-190698
	I0917 18:28:02.069321   78008 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:02.074550   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:02.091863   78008 kubeadm.go:883] updating cluster {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:02.092006   78008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 18:28:02.092069   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:02.152944   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:02.153024   78008 ssh_runner.go:195] Run: which lz4
	I0917 18:28:02.157664   78008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:02.162231   78008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:02.162290   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0917 18:28:04.015315   78008 crio.go:462] duration metric: took 1.857697544s to copy over tarball
	I0917 18:28:04.015398   78008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:01.931491   77264 main.go:141] libmachine: (embed-certs-081863) Waiting to get IP...
	I0917 18:28:01.932448   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:01.932939   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:01.933006   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:01.932914   79167 retry.go:31] will retry after 232.498944ms: waiting for machine to come up
	I0917 18:28:02.167642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.168159   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.168187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.168114   79167 retry.go:31] will retry after 297.644768ms: waiting for machine to come up
	I0917 18:28:02.467583   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.468395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.468422   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.468356   79167 retry.go:31] will retry after 486.22753ms: waiting for machine to come up
	I0917 18:28:02.956719   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:02.957187   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:02.957212   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:02.957151   79167 retry.go:31] will retry after 602.094874ms: waiting for machine to come up
	I0917 18:28:03.560509   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:03.561150   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:03.561177   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:03.561102   79167 retry.go:31] will retry after 732.31608ms: waiting for machine to come up
	I0917 18:28:04.294713   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:04.295264   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:04.295306   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:04.295212   79167 retry.go:31] will retry after 826.461309ms: waiting for machine to come up
	I0917 18:28:05.123086   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.123570   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.123596   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.123528   79167 retry.go:31] will retry after 785.524779ms: waiting for machine to come up
	I0917 18:28:02.503063   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.002750   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:03.716871   77819 pod_ready.go:103] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:05.718652   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:05.718685   77819 pod_ready.go:82] duration metric: took 6.010651123s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:05.718697   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:07.727355   77819 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:07.199571   78008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.184141166s)
	I0917 18:28:07.199605   78008 crio.go:469] duration metric: took 3.184259546s to extract the tarball
	I0917 18:28:07.199625   78008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 18:28:07.247308   78008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:07.290580   78008 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0917 18:28:07.290605   78008 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 18:28:07.290641   78008 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.290664   78008 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.290685   78008 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.290705   78008 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.290772   78008 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.290865   78008 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.290898   78008 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0917 18:28:07.290896   78008 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292426   78008 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.292473   78008 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.292479   78008 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.292525   78008 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:07.292555   78008 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.292544   78008 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.292594   78008 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.292796   78008 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0917 18:28:07.460802   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.466278   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.466439   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.473442   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.484306   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.490062   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.517285   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0917 18:28:07.550668   78008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0917 18:28:07.550730   78008 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.550779   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.598383   78008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0917 18:28:07.598426   78008 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.598468   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.627615   78008 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0917 18:28:07.627665   78008 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.627737   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675687   78008 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0917 18:28:07.675733   78008 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.675769   78008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0917 18:28:07.675806   78008 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.675848   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.675809   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689052   78008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0917 18:28:07.689106   78008 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.689141   78008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0917 18:28:07.689169   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689186   78008 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0917 18:28:07.689200   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.689224   78008 ssh_runner.go:195] Run: which crictl
	I0917 18:28:07.689252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.689296   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.689336   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.689374   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.782923   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.783204   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.833121   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.833205   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:07.833278   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:07.833316   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:07.833343   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:07.880054   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:07.885156   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0917 18:28:07.982007   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:07.990252   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0917 18:28:08.005351   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0917 18:28:08.008118   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0917 18:28:08.008319   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0917 18:28:08.066339   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0917 18:28:08.066388   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0917 18:28:08.173842   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0917 18:28:08.173884   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0917 18:28:08.173951   78008 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0917 18:28:08.181801   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0917 18:28:08.181832   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0917 18:28:08.181952   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0917 18:28:08.196666   78008 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:28:08.219844   78008 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0917 18:28:08.351645   78008 cache_images.go:92] duration metric: took 1.061022994s to LoadCachedImages
	W0917 18:28:08.351739   78008 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
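	Note: the warning above means the expected cache file for kube-apiserver_v1.20.0 is missing on the Jenkins host, so these images presumably have to be pulled from the registry inside the VM instead. A minimal, illustrative way to inspect the cache and pre-seed a missing image into this profile (profile name and paths taken from the log; a working minikube install is assumed):

	    # inspect what is actually cached on the host
	    ls /home/jenkins/minikube-integration/19662-11085/.minikube/cache/images/amd64/registry.k8s.io/
	    # optionally push a missing image straight into the profile's runtime (hypothetical usage)
	    minikube image load registry.k8s.io/kube-apiserver:v1.20.0 -p old-k8s-version-190698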
	I0917 18:28:08.351760   78008 kubeadm.go:934] updating node { 192.168.61.143 8443 v1.20.0 crio true true} ...
	I0917 18:28:08.351869   78008 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-190698 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:08.351947   78008 ssh_runner.go:195] Run: crio config
	I0917 18:28:08.404304   78008 cni.go:84] Creating CNI manager for ""
	I0917 18:28:08.404333   78008 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:08.404347   78008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:08.404369   78008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-190698 NodeName:old-k8s-version-190698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0917 18:28:08.404554   78008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-190698"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:08.404636   78008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0917 18:28:08.415712   78008 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:08.415788   78008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:08.426074   78008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0917 18:28:08.446765   78008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:08.467884   78008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
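	Note: the 2123-byte kubeadm.yaml.new written above is the three-document config dumped earlier (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick, hedged way to inspect or dry-run it on the node (profile name, binary path, and file path from the log; dry-run behaviour of this kubeadm version is assumed):

	    # read the config that minikube just copied onto the guest
	    minikube ssh -p old-k8s-version-190698 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # let kubeadm parse it without changing the cluster
	    minikube ssh -p old-k8s-version-190698 -- sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run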
	I0917 18:28:08.489565   78008 ssh_runner.go:195] Run: grep 192.168.61.143	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:08.494030   78008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:08.510100   78008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:08.667598   78008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:08.686416   78008 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698 for IP: 192.168.61.143
	I0917 18:28:08.686453   78008 certs.go:194] generating shared ca certs ...
	I0917 18:28:08.686477   78008 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:08.686680   78008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:08.686743   78008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:08.686762   78008 certs.go:256] generating profile certs ...
	I0917 18:28:08.686886   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/client.key
	I0917 18:28:08.686962   78008 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key.8ffdb4af
	I0917 18:28:08.687069   78008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key
	I0917 18:28:08.687256   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:08.687302   78008 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:08.687318   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:08.687360   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:08.687397   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:08.687441   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:08.687511   78008 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:08.688412   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:08.729318   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:08.772932   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:08.815329   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:08.866305   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 18:28:08.910004   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:08.950902   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:08.993679   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/old-k8s-version-190698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:09.021272   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:09.046848   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:09.078938   78008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:09.110919   78008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:09.134493   78008 ssh_runner.go:195] Run: openssl version
	I0917 18:28:09.142920   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:09.157440   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163382   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.163460   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:09.170446   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
	I0917 18:28:09.182690   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:09.195144   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200544   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.200612   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:09.207418   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:09.219931   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:09.234765   78008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240859   78008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.240930   78008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:09.249168   78008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:09.262225   78008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:09.267923   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:09.276136   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:09.284356   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:09.292809   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:09.301175   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:09.309486   78008 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
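	Note: each "openssl x509 ... -checkend 86400" run above exits 0 only if the certificate is still valid for at least the next 86400 seconds (24 h), which is presumably how minikube decides the existing certs can be reused instead of being regenerated. Reading the actual expiry by hand, using one of the cert paths from the log:

	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >24h"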
	I0917 18:28:09.317652   78008 kubeadm.go:392] StartCluster: {Name:old-k8s-version-190698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-190698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:09.317788   78008 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:09.317862   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.367633   78008 cri.go:89] found id: ""
	I0917 18:28:09.367714   78008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:09.378721   78008 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:09.378751   78008 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:09.378823   78008 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:09.389949   78008 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:09.391438   78008 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-190698" does not appear in /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:28:09.392494   78008 kubeconfig.go:62] /home/jenkins/minikube-integration/19662-11085/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-190698" cluster setting kubeconfig missing "old-k8s-version-190698" context setting]
	I0917 18:28:09.393951   78008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:09.396482   78008 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:09.407488   78008 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.143
	I0917 18:28:09.407541   78008 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:09.407555   78008 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:09.407617   78008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:09.454529   78008 cri.go:89] found id: ""
	I0917 18:28:09.454609   78008 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:09.473001   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:09.483455   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:09.483478   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:09.483524   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:09.492941   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:09.493015   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:09.503733   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:09.513646   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:09.513744   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:09.523852   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.533964   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:09.534023   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:09.544196   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:09.554778   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:09.554867   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
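	Note: the grep/rm pairs above implement a stale-kubeconfig cleanup: each of the four kubeconfigs that does not point at https://control-plane.minikube.internal:8443 is removed, so the "kubeadm init phase kubeconfig" step further down regenerates it fresh. A condensed sketch of the same check, using the paths and endpoint from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done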
	I0917 18:28:09.565305   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:09.576177   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:09.717093   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.376689   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.619407   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:05.910824   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:05.911297   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:05.911326   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:05.911249   79167 retry.go:31] will retry after 994.146737ms: waiting for machine to come up
	I0917 18:28:06.906856   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:06.907429   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:06.907489   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:06.907376   79167 retry.go:31] will retry after 1.592998284s: waiting for machine to come up
	I0917 18:28:08.502438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:08.502946   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:08.502969   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:08.502894   79167 retry.go:31] will retry after 1.71066586s: waiting for machine to come up
	I0917 18:28:10.215620   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:10.216060   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:10.216088   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:10.216019   79167 retry.go:31] will retry after 2.640762654s: waiting for machine to come up
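	Note: the retry.go lines above are libmachine polling libvirt for the DHCP lease of embed-certs-081863 with growing back-off until the VM reports an IP address. When debugging this wait by hand, the lease table can be read directly on the host (network name taken from the log; libvirt access assumed):

	    sudo virsh net-dhcp-leases mk-embed-certs-081863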
	I0917 18:28:07.502981   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.000910   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:12.002029   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:09.068583   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.068620   77819 pod_ready.go:82] duration metric: took 3.349915006s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.068634   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104652   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.104685   77819 pod_ready.go:82] duration metric: took 36.042715ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.104698   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.111983   77819 pod_ready.go:93] pod "kube-proxy-pbjlc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.112010   77819 pod_ready.go:82] duration metric: took 7.304378ms for pod "kube-proxy-pbjlc" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.112022   77819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118242   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:09.118270   77819 pod_ready.go:82] duration metric: took 6.238909ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:09.118284   77819 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:11.128221   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:10.743928   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:10.832172   78008 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:10.832275   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:11.832631   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.332364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.832978   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.333348   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:13.833325   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.333130   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:14.833200   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:15.333019   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:12.859438   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:12.859907   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:12.859933   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:12.859855   79167 retry.go:31] will retry after 2.872904917s: waiting for machine to come up
	I0917 18:28:15.734778   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:15.735248   77264 main.go:141] libmachine: (embed-certs-081863) DBG | unable to find current IP address of domain embed-certs-081863 in network mk-embed-certs-081863
	I0917 18:28:15.735276   77264 main.go:141] libmachine: (embed-certs-081863) DBG | I0917 18:28:15.735204   79167 retry.go:31] will retry after 3.980802088s: waiting for machine to come up
	I0917 18:28:14.002604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.501220   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:13.625926   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:16.124315   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:18.125564   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:15.832326   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.333353   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:16.833183   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.332967   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:17.833315   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.333025   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:18.832727   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.333388   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:19.833387   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:20.332777   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
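	Note: after "kubeadm init phase etcd local", minikube waits for the static-pod apiserver to appear by polling pgrep roughly every 500 ms, as the repeated Run lines above show. A rough manual equivalent (profile name and pgrep pattern from the log; exit-code propagation through minikube ssh is assumed):

	    until minikube ssh -p old-k8s-version-190698 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 0.5
	    done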
	I0917 18:28:19.720378   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720874   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has current primary IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.720895   77264 main.go:141] libmachine: (embed-certs-081863) Found IP for machine: 192.168.50.61
	I0917 18:28:19.720909   77264 main.go:141] libmachine: (embed-certs-081863) Reserving static IP address...
	I0917 18:28:19.721385   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.721428   77264 main.go:141] libmachine: (embed-certs-081863) DBG | skip adding static IP to network mk-embed-certs-081863 - found existing host DHCP lease matching {name: "embed-certs-081863", mac: "52:54:00:3f:17:3d", ip: "192.168.50.61"}
	I0917 18:28:19.721444   77264 main.go:141] libmachine: (embed-certs-081863) Reserved static IP address: 192.168.50.61
	I0917 18:28:19.721461   77264 main.go:141] libmachine: (embed-certs-081863) Waiting for SSH to be available...
	I0917 18:28:19.721478   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Getting to WaitForSSH function...
	I0917 18:28:19.723623   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.723932   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.723960   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.724082   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH client type: external
	I0917 18:28:19.724109   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa (-rw-------)
	I0917 18:28:19.724139   77264 main.go:141] libmachine: (embed-certs-081863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 18:28:19.724161   77264 main.go:141] libmachine: (embed-certs-081863) DBG | About to run SSH command:
	I0917 18:28:19.724173   77264 main.go:141] libmachine: (embed-certs-081863) DBG | exit 0
	I0917 18:28:19.849714   77264 main.go:141] libmachine: (embed-certs-081863) DBG | SSH cmd err, output: <nil>: 
	I0917 18:28:19.850124   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetConfigRaw
	I0917 18:28:19.850841   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:19.853490   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.853866   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.853891   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.854193   77264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/config.json ...
	I0917 18:28:19.854396   77264 machine.go:93] provisionDockerMachine start ...
	I0917 18:28:19.854414   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:19.854653   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.857041   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857395   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.857423   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.857547   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.857729   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857863   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.857975   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.858079   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.858237   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.858247   77264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 18:28:19.965775   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 18:28:19.965805   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966057   77264 buildroot.go:166] provisioning hostname "embed-certs-081863"
	I0917 18:28:19.966091   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:19.966278   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:19.968957   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969277   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:19.969308   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:19.969469   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:19.969656   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969816   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:19.969923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:19.970068   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:19.970294   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:19.970314   77264 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-081863 && echo "embed-certs-081863" | sudo tee /etc/hostname
	I0917 18:28:20.096717   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-081863
	
	I0917 18:28:20.096753   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.099788   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100162   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.100195   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.100351   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.100571   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100731   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.100864   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.101043   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.101273   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.101297   77264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-081863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-081863/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-081863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 18:28:20.224405   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
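	Note: the SSH script above makes the guest resolve its own hostname locally by pinning it to 127.0.1.1 in /etc/hosts, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. After provisioning, the guest should carry a line like "127.0.1.1 embed-certs-081863", which can be verified with an illustrative check:

	    minikube ssh -p embed-certs-081863 -- grep embed-certs-081863 /etc/hosts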
	I0917 18:28:20.224447   77264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19662-11085/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-11085/.minikube}
	I0917 18:28:20.224468   77264 buildroot.go:174] setting up certificates
	I0917 18:28:20.224476   77264 provision.go:84] configureAuth start
	I0917 18:28:20.224487   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetMachineName
	I0917 18:28:20.224796   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.227642   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.227990   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.228020   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.228128   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.230411   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230785   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.230819   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.230945   77264 provision.go:143] copyHostCerts
	I0917 18:28:20.231012   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem, removing ...
	I0917 18:28:20.231026   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem
	I0917 18:28:20.231097   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/ca.pem (1082 bytes)
	I0917 18:28:20.231220   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem, removing ...
	I0917 18:28:20.231232   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem
	I0917 18:28:20.231263   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/cert.pem (1123 bytes)
	I0917 18:28:20.231349   77264 exec_runner.go:144] found /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem, removing ...
	I0917 18:28:20.231361   77264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem
	I0917 18:28:20.231387   77264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-11085/.minikube/key.pem (1679 bytes)
	I0917 18:28:20.231460   77264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem org=jenkins.embed-certs-081863 san=[127.0.0.1 192.168.50.61 embed-certs-081863 localhost minikube]
	I0917 18:28:20.293317   77264 provision.go:177] copyRemoteCerts
	I0917 18:28:20.293370   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 18:28:20.293395   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.296247   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296611   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.296649   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.296878   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.297065   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.297251   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.297411   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.384577   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 18:28:20.409805   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 18:28:20.436199   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 18:28:20.463040   77264 provision.go:87] duration metric: took 238.548615ms to configureAuth
	I0917 18:28:20.463072   77264 buildroot.go:189] setting minikube options for container-runtime
	I0917 18:28:20.463270   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:28:20.463371   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.466291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466656   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.466688   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.466942   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.467172   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467363   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.467511   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.467661   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.467850   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.467864   77264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 18:28:20.713934   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 18:28:20.713961   77264 machine.go:96] duration metric: took 859.552656ms to provisionDockerMachine
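The SSH command above writes a systemd environment drop-in that passes --insecure-registry for the service CIDR to CRI-O and then restarts the runtime. A minimal shell sketch of the same step, assuming CRI-O's unit loads /etc/sysconfig/crio.minikube via an EnvironmentFile= directive:

    # Write the drop-in consumed by the crio.service unit, then restart CRI-O.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio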
	I0917 18:28:20.713975   77264 start.go:293] postStartSetup for "embed-certs-081863" (driver="kvm2")
	I0917 18:28:20.713989   77264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 18:28:20.714017   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.714338   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 18:28:20.714366   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.717415   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717784   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.717810   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.717979   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.718181   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.718334   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.718489   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:18.501410   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:21.001625   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.808582   77264 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 18:28:20.812874   77264 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 18:28:20.812903   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/addons for local assets ...
	I0917 18:28:20.812985   77264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-11085/.minikube/files for local assets ...
	I0917 18:28:20.813082   77264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem -> 182592.pem in /etc/ssl/certs
	I0917 18:28:20.813202   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 18:28:20.823533   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:20.853907   77264 start.go:296] duration metric: took 139.917603ms for postStartSetup
	I0917 18:28:20.853950   77264 fix.go:56] duration metric: took 20.287354242s for fixHost
	I0917 18:28:20.853974   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.856746   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857114   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.857141   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.857324   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.857572   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857749   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.857925   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.858084   77264 main.go:141] libmachine: Using SSH client type: native
	I0917 18:28:20.858314   77264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I0917 18:28:20.858329   77264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 18:28:20.970530   77264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726597700.949100009
	
	I0917 18:28:20.970553   77264 fix.go:216] guest clock: 1726597700.949100009
	I0917 18:28:20.970561   77264 fix.go:229] Guest: 2024-09-17 18:28:20.949100009 +0000 UTC Remote: 2024-09-17 18:28:20.853955257 +0000 UTC m=+355.105413575 (delta=95.144752ms)
	I0917 18:28:20.970581   77264 fix.go:200] guest clock delta is within tolerance: 95.144752ms
	I0917 18:28:20.970586   77264 start.go:83] releasing machines lock for "embed-certs-081863", held for 20.404030588s
	I0917 18:28:20.970604   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.970874   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:20.973477   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973786   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.973813   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.973938   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974529   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974733   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:28:20.974825   77264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 18:28:20.974881   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.974945   77264 ssh_runner.go:195] Run: cat /version.json
	I0917 18:28:20.974973   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:28:20.977671   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.977994   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978044   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978203   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978365   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.978517   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.978555   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:20.978590   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:20.978659   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:20.978775   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:28:20.978915   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:28:20.979042   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:28:20.979161   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:28:21.080649   77264 ssh_runner.go:195] Run: systemctl --version
	I0917 18:28:21.087412   77264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 18:28:21.241355   77264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 18:28:21.249173   77264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 18:28:21.249245   77264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 18:28:21.266337   77264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 18:28:21.266369   77264 start.go:495] detecting cgroup driver to use...
	I0917 18:28:21.266441   77264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 18:28:21.284535   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 18:28:21.300191   77264 docker.go:217] disabling cri-docker service (if available) ...
	I0917 18:28:21.300262   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 18:28:21.315687   77264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 18:28:21.331132   77264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 18:28:21.469564   77264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 18:28:21.618385   77264 docker.go:233] disabling docker service ...
	I0917 18:28:21.618465   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 18:28:21.635746   77264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 18:28:21.653011   77264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 18:28:21.806397   77264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 18:28:21.942768   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 18:28:21.957319   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 18:28:21.977409   77264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 18:28:21.977479   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:21.989090   77264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 18:28:21.989165   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.001555   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.013044   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.024634   77264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 18:28:22.036482   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.048082   77264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 18:28:22.067971   77264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
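The sed edits above point CRI-O at the pause:3.10 image, switch it to the cgroupfs cgroup manager with a per-pod conmon cgroup, and open unprivileged low ports via default_sysctls. A quick way to confirm the resulting drop-in, as a sketch (paths and values taken from the log; the exact file layout may differ):

    # Show the keys minikube just rewrote in the CRI-O drop-in config.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",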
	I0917 18:28:22.079429   77264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 18:28:22.089772   77264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 18:28:22.089841   77264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 18:28:22.104492   77264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
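The sysctl probe fails because br_netfilter is not loaded yet, which is expected on a fresh boot; minikube falls back to loading the module and then enables IPv4 forwarding for pod networking. The equivalent manual steps, as a sketch:

    # Load bridge netfilter support if the sysctl key is missing, then enable forwarding.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"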
	I0917 18:28:22.116429   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:22.250299   77264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 18:28:22.353115   77264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 18:28:22.353195   77264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 18:28:22.359475   77264 start.go:563] Will wait 60s for crictl version
	I0917 18:28:22.359527   77264 ssh_runner.go:195] Run: which crictl
	I0917 18:28:22.363627   77264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 18:28:22.402802   77264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 18:28:22.402902   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.432389   77264 ssh_runner.go:195] Run: crio --version
	I0917 18:28:22.463277   77264 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0917 18:28:20.625519   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:23.126788   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:20.832698   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.332644   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:21.832955   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.332859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.832393   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.333067   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:23.833266   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.332837   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:24.832669   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:25.332772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:22.464498   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetIP
	I0917 18:28:22.467595   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468070   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:28:22.468104   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:28:22.468400   77264 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 18:28:22.473355   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:22.487043   77264 kubeadm.go:883] updating cluster {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 18:28:22.487162   77264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 18:28:22.487204   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:22.525877   77264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0917 18:28:22.525947   77264 ssh_runner.go:195] Run: which lz4
	I0917 18:28:22.530318   77264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 18:28:22.534779   77264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 18:28:22.534821   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0917 18:28:24.007808   77264 crio.go:462] duration metric: took 1.477544842s to copy over tarball
	I0917 18:28:24.007895   77264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 18:28:23.002565   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.501068   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.627993   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:28.126373   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:25.832772   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.332949   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.833016   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.332604   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:27.833127   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.332337   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:28.832430   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.332564   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.833193   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:30.333057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:26.210912   77264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202977006s)
	I0917 18:28:26.210942   77264 crio.go:469] duration metric: took 2.203106209s to extract the tarball
	I0917 18:28:26.210950   77264 ssh_runner.go:146] rm: /preloaded.tar.lz4
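Because the freshly restarted node has no container images yet, the ~389 MB preload tarball is copied over and unpacked into /var before kubeadm runs. A sketch of the extraction and cleanup steps shown above (lz4-compressed tar with extended attributes preserved; the exact flags are taken from the log):

    # Unpack the cached image/preload tarball into /var, then remove it to free space.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4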
	I0917 18:28:26.249979   77264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 18:28:26.297086   77264 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 18:28:26.297112   77264 cache_images.go:84] Images are preloaded, skipping loading
	I0917 18:28:26.297122   77264 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.31.1 crio true true} ...
	I0917 18:28:26.297238   77264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-081863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 18:28:26.297323   77264 ssh_runner.go:195] Run: crio config
	I0917 18:28:26.343491   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:26.343516   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:26.343526   77264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 18:28:26.343547   77264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-081863 NodeName:embed-certs-081863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 18:28:26.343711   77264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-081863"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 18:28:26.343786   77264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 18:28:26.354782   77264 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 18:28:26.354863   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 18:28:26.365347   77264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 18:28:26.383377   77264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 18:28:26.401629   77264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
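The kubelet unit drop-in and the kubeadm config above are copied to the node, with the config staged as kubeadm.yaml.new; once promoted to /var/tmp/minikube/kubeadm.yaml it drives the phased restart that follows. A sketch of the first phase, using the pinned binaries directory from the log (later phases cover kubeconfig, kubelet-start, control-plane, and etcd):

    # Regenerate cluster certificates from the uploaded kubeadm config.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml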
	I0917 18:28:26.420595   77264 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I0917 18:28:26.424760   77264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 18:28:26.439152   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:28:26.582540   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:28:26.600662   77264 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863 for IP: 192.168.50.61
	I0917 18:28:26.600684   77264 certs.go:194] generating shared ca certs ...
	I0917 18:28:26.600701   77264 certs.go:226] acquiring lock for ca certs: {Name:mka88aa799a46cbbcd9c06f5d7ca84ae282f447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:28:26.600877   77264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key
	I0917 18:28:26.600932   77264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key
	I0917 18:28:26.600946   77264 certs.go:256] generating profile certs ...
	I0917 18:28:26.601065   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/client.key
	I0917 18:28:26.601154   77264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key.b407faea
	I0917 18:28:26.601218   77264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key
	I0917 18:28:26.601382   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem (1338 bytes)
	W0917 18:28:26.601423   77264 certs.go:480] ignoring /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259_empty.pem, impossibly tiny 0 bytes
	I0917 18:28:26.601438   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 18:28:26.601501   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/ca.pem (1082 bytes)
	I0917 18:28:26.601537   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/cert.pem (1123 bytes)
	I0917 18:28:26.601568   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/certs/key.pem (1679 bytes)
	I0917 18:28:26.601625   77264 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem (1708 bytes)
	I0917 18:28:26.602482   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 18:28:26.641066   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 18:28:26.665154   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 18:28:26.699573   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 18:28:26.749625   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 18:28:26.790757   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 18:28:26.818331   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 18:28:26.848575   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/embed-certs-081863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 18:28:26.875901   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/ssl/certs/182592.pem --> /usr/share/ca-certificates/182592.pem (1708 bytes)
	I0917 18:28:26.902547   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 18:28:26.929873   77264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-11085/.minikube/certs/18259.pem --> /usr/share/ca-certificates/18259.pem (1338 bytes)
	I0917 18:28:26.954674   77264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 18:28:26.972433   77264 ssh_runner.go:195] Run: openssl version
	I0917 18:28:26.978761   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182592.pem && ln -fs /usr/share/ca-certificates/182592.pem /etc/ssl/certs/182592.pem"
	I0917 18:28:26.991752   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996704   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:13 /usr/share/ca-certificates/182592.pem
	I0917 18:28:26.996771   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182592.pem
	I0917 18:28:27.003567   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182592.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 18:28:27.015305   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 18:28:27.027052   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032815   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.032880   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 18:28:27.039495   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 18:28:27.051331   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18259.pem && ln -fs /usr/share/ca-certificates/18259.pem /etc/ssl/certs/18259.pem"
	I0917 18:28:27.062771   77264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067404   77264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:13 /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.067461   77264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18259.pem
	I0917 18:28:27.073663   77264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18259.pem /etc/ssl/certs/51391683.0"
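Each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs both by name and by its OpenSSL subject hash, which is how TLS clients on the guest resolve trust anchors. A sketch for the minikube CA (hash b5213941 in this run):

    # Compute the subject hash and create the hash-named symlink OpenSSL looks up.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"${HASH}.0"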
	I0917 18:28:27.085283   77264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 18:28:27.090171   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 18:28:27.096537   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 18:28:27.103011   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 18:28:27.110516   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 18:28:27.116647   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 18:28:27.123087   77264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 18:28:27.129689   77264 kubeadm.go:392] StartCluster: {Name:embed-certs-081863 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-081863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 18:28:27.129958   77264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 18:28:27.130021   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.171240   77264 cri.go:89] found id: ""
	I0917 18:28:27.171312   77264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 18:28:27.183474   77264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 18:28:27.183494   77264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 18:28:27.183555   77264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 18:28:27.195418   77264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 18:28:27.196485   77264 kubeconfig.go:125] found "embed-certs-081863" server: "https://192.168.50.61:8443"
	I0917 18:28:27.198613   77264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 18:28:27.210454   77264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.61
	I0917 18:28:27.210489   77264 kubeadm.go:1160] stopping kube-system containers ...
	I0917 18:28:27.210503   77264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 18:28:27.210560   77264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 18:28:27.249423   77264 cri.go:89] found id: ""
	I0917 18:28:27.249495   77264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 18:28:27.270900   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:28:27.283556   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:28:27.283577   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:28:27.283636   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:28:27.293555   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:28:27.293619   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:28:27.303876   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:28:27.313465   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:28:27.313533   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:28:27.323675   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.333753   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:28:27.333828   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:28:27.345276   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:28:27.356223   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:28:27.356278   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:28:27.366916   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:28:27.380179   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:27.518193   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.381642   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.600807   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.674888   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:28.751910   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:28:28.752037   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.252499   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.752690   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:29.792406   77264 api_server.go:72] duration metric: took 1.040494132s to wait for apiserver process to appear ...
	I0917 18:28:29.792439   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:28:29.792463   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:29.793008   77264 api_server.go:269] stopped: https://192.168.50.61:8443/healthz: Get "https://192.168.50.61:8443/healthz": dial tcp 192.168.50.61:8443: connect: connection refused
	I0917 18:28:30.292587   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:27.501185   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:29.501753   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:32.000659   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.626195   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:33.126180   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:30.832853   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.332521   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:31.832513   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.332347   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.833201   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.332485   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:33.833002   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.333150   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:34.832985   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.332584   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:32.308247   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.308273   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.308286   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.327248   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 18:28:32.327283   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 18:28:32.792628   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:32.798368   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:32.798399   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.292887   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.298137   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.298162   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:33.792634   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:33.797062   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:33.797095   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.292626   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.297161   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.297198   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:34.792621   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:34.797092   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:34.797124   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.292693   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.298774   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 18:28:35.298806   77264 api_server.go:103] status: https://192.168.50.61:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 18:28:35.793350   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:28:35.798559   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:28:35.805421   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:28:35.805455   77264 api_server.go:131] duration metric: took 6.013008084s to wait for apiserver health ...
	I0917 18:28:35.805467   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:28:35.805476   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:28:35.807270   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:28:34.500180   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:36.501455   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.625916   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:38.124412   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:35.833375   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.332518   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:36.833057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.333093   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:37.832449   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.333260   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:38.832592   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.332352   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:39.833094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:40.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:35.808509   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:28:35.820438   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:28:35.843308   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:28:35.858341   77264 system_pods.go:59] 8 kube-system pods found
	I0917 18:28:35.858375   77264 system_pods.go:61] "coredns-7c65d6cfc9-fv5t2" [6d147703-1be6-4e14-b00a-00563bb9f05d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:28:35.858383   77264 system_pods.go:61] "etcd-embed-certs-081863" [e7da3a2f-02a8-4fb8-bcc1-2057560e2a99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 18:28:35.858390   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [f576f758-867b-45ff-83e7-c7ec010c784d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 18:28:35.858396   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [864cfdcd-bba9-41ef-a014-9b44f90d10fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 18:28:35.858400   77264 system_pods.go:61] "kube-proxy-5ctps" [adbf43b1-986e-4bef-b515-9bf20e847369] Running
	I0917 18:28:35.858407   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [1c6dc904-888a-43e2-9edf-ad87025d9cd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 18:28:35.858425   77264 system_pods.go:61] "metrics-server-6867b74b74-g2ttm" [dbb935ab-664c-420e-8b8e-4c033c3e07d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:28:35.858438   77264 system_pods.go:61] "storage-provisioner" [3a81abf3-c894-4279-91ce-6a66e4517de9] Running
	I0917 18:28:35.858446   77264 system_pods.go:74] duration metric: took 15.115932ms to wait for pod list to return data ...
	I0917 18:28:35.858459   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:28:35.865686   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:28:35.865715   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:28:35.865728   77264 node_conditions.go:105] duration metric: took 7.262354ms to run NodePressure ...
	I0917 18:28:35.865747   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 18:28:36.133217   77264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142193   77264 kubeadm.go:739] kubelet initialised
	I0917 18:28:36.142216   77264 kubeadm.go:740] duration metric: took 8.957553ms waiting for restarted kubelet to initialise ...
	I0917 18:28:36.142223   77264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:28:36.148365   77264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.154605   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154633   77264 pod_ready.go:82] duration metric: took 6.241589ms for pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.154644   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "coredns-7c65d6cfc9-fv5t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.154654   77264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.160864   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160888   77264 pod_ready.go:82] duration metric: took 6.224743ms for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.160899   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "etcd-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.160906   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.167006   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167038   77264 pod_ready.go:82] duration metric: took 6.114714ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.167049   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.167058   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.247310   77264 pod_ready.go:98] node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247349   77264 pod_ready.go:82] duration metric: took 80.274557ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	E0917 18:28:36.247361   77264 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-081863" hosting pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-081863" has status "Ready":"False"
	I0917 18:28:36.247368   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.647989   77264 pod_ready.go:93] pod "kube-proxy-5ctps" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:36.648012   77264 pod_ready.go:82] duration metric: took 400.635503ms for pod "kube-proxy-5ctps" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:36.648022   77264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:38.654947   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.658044   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:39.000917   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:41.001794   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.124879   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:42.125939   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:40.832609   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.332438   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:41.832456   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.332846   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:42.832374   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.332703   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.832502   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.332845   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:44.832341   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:45.333377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:43.154904   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.155253   77264 pod_ready.go:103] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:43.001900   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.501989   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:44.625492   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:47.124276   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:45.832541   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.332842   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:46.832446   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.333344   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.833087   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.332527   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:48.832377   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.332937   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:49.833254   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:50.332394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:47.157575   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:28:47.157603   77264 pod_ready.go:82] duration metric: took 10.509573459s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:47.157614   77264 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	I0917 18:28:49.163957   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:48.000696   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.001527   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:49.627381   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.125550   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:50.833049   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.333314   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.832959   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.332830   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:52.832394   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.333004   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:53.832841   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.333310   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:54.832648   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:55.332487   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:51.164376   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:53.164866   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.165065   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:52.501375   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.501792   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.006451   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:54.624863   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:57.125005   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:55.832339   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.333257   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:56.833293   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.332665   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.833189   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.332409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:58.833030   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.333251   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:59.832903   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:00.333365   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:28:57.664921   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.165972   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.500173   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.501014   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:28:59.125299   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:01.125883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:00.833018   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.332976   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:01.832860   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.332401   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.832409   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.333273   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:03.832435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.332572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:04.832618   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:05.333051   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:02.166251   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.665729   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:04.000731   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:06.000850   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:03.624799   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.625817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.124471   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:05.833109   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.332870   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.833248   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.332856   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:07.832795   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.332779   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:08.832356   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.333340   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:09.832899   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:10.332646   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:06.666037   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:09.163623   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:08.501863   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.504311   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.125479   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:12.625676   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:10.833153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:10.833224   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:10.877318   78008 cri.go:89] found id: ""
	I0917 18:29:10.877347   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.877356   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:10.877363   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:10.877433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:10.913506   78008 cri.go:89] found id: ""
	I0917 18:29:10.913532   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.913540   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:10.913546   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:10.913607   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:10.952648   78008 cri.go:89] found id: ""
	I0917 18:29:10.952679   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.952689   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:10.952699   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:10.952761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:10.992819   78008 cri.go:89] found id: ""
	I0917 18:29:10.992851   78008 logs.go:276] 0 containers: []
	W0917 18:29:10.992863   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:10.992870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:10.992923   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:11.032717   78008 cri.go:89] found id: ""
	I0917 18:29:11.032752   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.032764   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:11.032772   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:11.032831   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:11.070909   78008 cri.go:89] found id: ""
	I0917 18:29:11.070934   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.070944   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:11.070953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:11.071005   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:11.111115   78008 cri.go:89] found id: ""
	I0917 18:29:11.111146   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.111157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:11.111164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:11.111233   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:11.147704   78008 cri.go:89] found id: ""
	I0917 18:29:11.147738   78008 logs.go:276] 0 containers: []
	W0917 18:29:11.147751   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:11.147770   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:11.147783   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:11.222086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:11.222131   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.268572   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:11.268598   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:11.320140   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:11.320179   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:11.336820   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:11.336862   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:11.476726   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:13.977359   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:13.991780   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:13.991861   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:14.029657   78008 cri.go:89] found id: ""
	I0917 18:29:14.029686   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.029697   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:14.029703   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:14.029761   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:14.070673   78008 cri.go:89] found id: ""
	I0917 18:29:14.070707   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.070716   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:14.070722   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:14.070781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:14.109826   78008 cri.go:89] found id: ""
	I0917 18:29:14.109862   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.109872   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:14.109880   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:14.109938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:14.156812   78008 cri.go:89] found id: ""
	I0917 18:29:14.156839   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.156848   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:14.156853   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:14.156909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:14.203877   78008 cri.go:89] found id: ""
	I0917 18:29:14.203906   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.203915   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:14.203921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:14.203973   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:14.263366   78008 cri.go:89] found id: ""
	I0917 18:29:14.263395   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.263403   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:14.263408   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:14.263469   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:14.305300   78008 cri.go:89] found id: ""
	I0917 18:29:14.305324   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.305331   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:14.305337   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:14.305393   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:14.342838   78008 cri.go:89] found id: ""
	I0917 18:29:14.342874   78008 logs.go:276] 0 containers: []
	W0917 18:29:14.342888   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:14.342900   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:14.342915   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:14.394814   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:14.394864   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:14.410058   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:14.410084   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:14.497503   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:14.497532   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:14.497547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:14.578545   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:14.578582   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:11.164670   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.664310   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.664728   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:13.001122   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.001204   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:15.124476   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.125696   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.119953   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:17.134019   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:17.134078   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:17.174236   78008 cri.go:89] found id: ""
	I0917 18:29:17.174259   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.174268   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:17.174273   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:17.174317   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:17.208678   78008 cri.go:89] found id: ""
	I0917 18:29:17.208738   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.208749   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:17.208757   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:17.208820   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:17.242890   78008 cri.go:89] found id: ""
	I0917 18:29:17.242915   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.242923   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:17.242929   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:17.242983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:17.281990   78008 cri.go:89] found id: ""
	I0917 18:29:17.282013   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.282038   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:17.282046   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:17.282105   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:17.320104   78008 cri.go:89] found id: ""
	I0917 18:29:17.320140   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.320153   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:17.320160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:17.320220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:17.361959   78008 cri.go:89] found id: ""
	I0917 18:29:17.361993   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.362004   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:17.362012   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:17.362120   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:17.400493   78008 cri.go:89] found id: ""
	I0917 18:29:17.400531   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.400543   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:17.400550   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:17.400611   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:17.435549   78008 cri.go:89] found id: ""
	I0917 18:29:17.435574   78008 logs.go:276] 0 containers: []
	W0917 18:29:17.435582   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:17.435590   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:17.435605   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:17.483883   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:17.483919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:17.498771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:17.498801   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:17.583654   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:17.583680   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:17.583695   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:17.670903   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:17.670935   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.213963   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:20.228410   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:20.228487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:20.268252   78008 cri.go:89] found id: ""
	I0917 18:29:20.268290   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.268301   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:20.268308   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:20.268385   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:20.307725   78008 cri.go:89] found id: ""
	I0917 18:29:20.307765   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.307774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:20.307779   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:20.307840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:20.350112   78008 cri.go:89] found id: ""
	I0917 18:29:20.350138   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.350146   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:20.350151   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:20.350209   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:20.386658   78008 cri.go:89] found id: ""
	I0917 18:29:20.386683   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.386692   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:20.386697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:20.386758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:20.427135   78008 cri.go:89] found id: ""
	I0917 18:29:20.427168   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.427180   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:20.427186   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:20.427253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:20.464054   78008 cri.go:89] found id: ""
	I0917 18:29:20.464081   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.464091   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:20.464098   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:20.464162   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:20.503008   78008 cri.go:89] found id: ""
	I0917 18:29:20.503034   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.503043   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:20.503048   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:20.503107   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:20.539095   78008 cri.go:89] found id: ""
	I0917 18:29:20.539125   78008 logs.go:276] 0 containers: []
	W0917 18:29:20.539137   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:20.539149   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:20.539165   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:20.552429   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:20.552457   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:20.631977   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:20.632000   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:20.632012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:18.164593   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.164968   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:17.501184   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.503422   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:22.001605   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:19.624854   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:21.625397   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:20.709917   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:20.709950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:20.752312   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:20.752349   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.310520   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:23.327230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:23.327296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:23.369648   78008 cri.go:89] found id: ""
	I0917 18:29:23.369677   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.369687   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:23.369692   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:23.369756   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:23.406968   78008 cri.go:89] found id: ""
	I0917 18:29:23.407002   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.407010   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:23.407017   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:23.407079   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:23.448246   78008 cri.go:89] found id: ""
	I0917 18:29:23.448275   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.448285   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:23.448290   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:23.448350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:23.486975   78008 cri.go:89] found id: ""
	I0917 18:29:23.487006   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.487016   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:23.487024   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:23.487077   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:23.523614   78008 cri.go:89] found id: ""
	I0917 18:29:23.523645   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.523656   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:23.523672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:23.523751   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:23.567735   78008 cri.go:89] found id: ""
	I0917 18:29:23.567763   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.567774   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:23.567781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:23.567846   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:23.610952   78008 cri.go:89] found id: ""
	I0917 18:29:23.610985   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.610995   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:23.611002   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:23.611063   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:23.647601   78008 cri.go:89] found id: ""
	I0917 18:29:23.647633   78008 logs.go:276] 0 containers: []
	W0917 18:29:23.647645   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:23.647657   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:23.647674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:23.720969   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:23.720998   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:23.721014   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:23.802089   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:23.802124   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:23.847641   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:23.847673   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:23.901447   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:23.901488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
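Each 78008 cycle above follows the same pattern: probe for a kube-apiserver process, list CRI containers for every control-plane component (all empty here), then fall back to kubelet, dmesg, CRI-O, and container-status logs; "describe nodes" keeps failing, consistent with nothing listening on localhost:8443. A minimal sketch of that probe, assuming local execution instead of minikube's ssh_runner and with illustrative helper names, is:

// Illustrative sketch only (not minikube's ssh_runner code): run the same
// probes the log cycle above shows, locally via os/exec. Component names and
// the "no container found" interpretation follow the log; helper names and
// error handling are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mimics `sudo crictl ps -a --quiet --name=<component>`:
// an empty result is reported as "No container was found matching ...".
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
	// With nothing found, the cycle falls back to node-level logs:
	// journalctl -u kubelet / -u crio, dmesg, and a container-status listing.
	for _, args := range [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
	} {
		_ = exec.Command("sudo", args...).Run() // output handling omitted
	}
}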
	I0917 18:29:22.663696   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:25.164022   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.001853   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.002572   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:24.124362   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.125485   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:26.416524   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:26.432087   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:26.432148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:26.473403   78008 cri.go:89] found id: ""
	I0917 18:29:26.473435   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.473446   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:26.473453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:26.473516   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:26.510736   78008 cri.go:89] found id: ""
	I0917 18:29:26.510764   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.510774   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:26.510780   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:26.510847   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:26.549732   78008 cri.go:89] found id: ""
	I0917 18:29:26.549766   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.549779   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:26.549789   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:26.549857   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:26.586548   78008 cri.go:89] found id: ""
	I0917 18:29:26.586580   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.586592   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:26.586599   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:26.586664   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:26.624246   78008 cri.go:89] found id: ""
	I0917 18:29:26.624276   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.624286   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:26.624294   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:26.624353   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:26.662535   78008 cri.go:89] found id: ""
	I0917 18:29:26.662565   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.662576   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:26.662584   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:26.662648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:26.697775   78008 cri.go:89] found id: ""
	I0917 18:29:26.697810   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.697820   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:26.697826   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:26.697885   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:26.734181   78008 cri.go:89] found id: ""
	I0917 18:29:26.734209   78008 logs.go:276] 0 containers: []
	W0917 18:29:26.734218   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:26.734228   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:26.734239   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:26.783128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:26.783163   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:26.797674   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:26.797713   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:26.873548   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:26.873570   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:26.873581   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:26.954031   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:26.954066   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:29.494364   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:29.508545   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:29.508616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:29.545854   78008 cri.go:89] found id: ""
	I0917 18:29:29.545880   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.545888   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:29.545893   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:29.545941   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:29.581646   78008 cri.go:89] found id: ""
	I0917 18:29:29.581680   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.581691   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:29.581698   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:29.581770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:29.627071   78008 cri.go:89] found id: ""
	I0917 18:29:29.627101   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.627112   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:29.627119   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:29.627176   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:29.662514   78008 cri.go:89] found id: ""
	I0917 18:29:29.662544   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.662555   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:29.662562   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:29.662622   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:29.699246   78008 cri.go:89] found id: ""
	I0917 18:29:29.699278   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.699291   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:29.699299   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:29.699359   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:29.736018   78008 cri.go:89] found id: ""
	I0917 18:29:29.736057   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.736070   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:29.736077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:29.736138   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:29.773420   78008 cri.go:89] found id: ""
	I0917 18:29:29.773449   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.773459   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:29.773467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:29.773527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:29.811530   78008 cri.go:89] found id: ""
	I0917 18:29:29.811556   78008 logs.go:276] 0 containers: []
	W0917 18:29:29.811568   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:29.811578   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:29.811592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:29.870083   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:29.870123   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:29.885471   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:29.885500   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:29.964699   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:29.964730   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:29.964754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:30.048858   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:30.048899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:27.165404   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:29.166367   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.500007   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:30.500594   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:28.626043   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:31.125419   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.125872   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:32.597013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:32.611613   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:32.611691   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:32.648043   78008 cri.go:89] found id: ""
	I0917 18:29:32.648074   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.648086   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:32.648093   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:32.648159   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:32.686471   78008 cri.go:89] found id: ""
	I0917 18:29:32.686514   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.686526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:32.686533   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:32.686594   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:32.721495   78008 cri.go:89] found id: ""
	I0917 18:29:32.721521   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.721530   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:32.721536   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:32.721595   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:32.757916   78008 cri.go:89] found id: ""
	I0917 18:29:32.757949   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.757960   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:32.757968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:32.758035   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:32.793880   78008 cri.go:89] found id: ""
	I0917 18:29:32.793913   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.793925   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:32.793933   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:32.794006   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:32.834944   78008 cri.go:89] found id: ""
	I0917 18:29:32.834965   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.834973   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:32.834983   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:32.835044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:32.872852   78008 cri.go:89] found id: ""
	I0917 18:29:32.872875   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.872883   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:32.872888   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:32.872939   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:32.913506   78008 cri.go:89] found id: ""
	I0917 18:29:32.913530   78008 logs.go:276] 0 containers: []
	W0917 18:29:32.913538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:32.913547   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:32.913562   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:32.928726   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:32.928751   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:33.001220   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:33.001259   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:33.001274   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:33.080268   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:33.080304   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:33.123977   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:33.124008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:31.664513   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:34.164735   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:33.001341   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.500975   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.625484   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.625964   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:35.678936   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:35.692953   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:35.693036   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:35.736947   78008 cri.go:89] found id: ""
	I0917 18:29:35.736984   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.737004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:35.737012   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:35.737076   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:35.776148   78008 cri.go:89] found id: ""
	I0917 18:29:35.776173   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.776184   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:35.776191   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:35.776253   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:35.814136   78008 cri.go:89] found id: ""
	I0917 18:29:35.814167   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.814179   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:35.814189   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:35.814252   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:35.854451   78008 cri.go:89] found id: ""
	I0917 18:29:35.854480   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.854492   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:35.854505   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:35.854573   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:35.893068   78008 cri.go:89] found id: ""
	I0917 18:29:35.893091   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.893102   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:35.893108   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:35.893174   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:35.929116   78008 cri.go:89] found id: ""
	I0917 18:29:35.929140   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.929148   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:35.929153   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:35.929211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:35.964253   78008 cri.go:89] found id: ""
	I0917 18:29:35.964284   78008 logs.go:276] 0 containers: []
	W0917 18:29:35.964294   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:35.964300   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:35.964364   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:36.002761   78008 cri.go:89] found id: ""
	I0917 18:29:36.002790   78008 logs.go:276] 0 containers: []
	W0917 18:29:36.002800   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:36.002810   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:36.002825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:36.017581   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:36.017614   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:36.086982   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:36.087008   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:36.087024   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:36.169886   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:36.169919   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:36.215327   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:36.215355   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:38.768619   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:38.781979   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:38.782049   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:38.818874   78008 cri.go:89] found id: ""
	I0917 18:29:38.818903   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.818911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:38.818918   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:38.818967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:38.857619   78008 cri.go:89] found id: ""
	I0917 18:29:38.857648   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.857664   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:38.857670   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:38.857747   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:38.896861   78008 cri.go:89] found id: ""
	I0917 18:29:38.896896   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.896907   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:38.896914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:38.896977   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:38.934593   78008 cri.go:89] found id: ""
	I0917 18:29:38.934616   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.934625   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:38.934632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:38.934707   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:38.972359   78008 cri.go:89] found id: ""
	I0917 18:29:38.972383   78008 logs.go:276] 0 containers: []
	W0917 18:29:38.972394   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:38.972400   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:38.972468   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:39.007529   78008 cri.go:89] found id: ""
	I0917 18:29:39.007554   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.007561   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:39.007567   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:39.007613   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:39.042646   78008 cri.go:89] found id: ""
	I0917 18:29:39.042679   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.042690   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:39.042697   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:39.042758   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:39.080077   78008 cri.go:89] found id: ""
	I0917 18:29:39.080106   78008 logs.go:276] 0 containers: []
	W0917 18:29:39.080118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:39.080128   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:39.080144   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:39.094785   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:39.094812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:39.168149   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:39.168173   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:39.168184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:39.258912   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:39.258958   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:39.303103   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:39.303133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:36.664761   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:38.664881   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:37.501339   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.001032   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.001645   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:40.124869   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:42.125730   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:41.860904   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:41.875574   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:41.875644   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:41.916576   78008 cri.go:89] found id: ""
	I0917 18:29:41.916603   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.916615   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:41.916623   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:41.916674   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:41.952222   78008 cri.go:89] found id: ""
	I0917 18:29:41.952284   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.952298   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:41.952307   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:41.952374   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:41.992584   78008 cri.go:89] found id: ""
	I0917 18:29:41.992611   78008 logs.go:276] 0 containers: []
	W0917 18:29:41.992621   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:41.992627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:41.992689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:42.030490   78008 cri.go:89] found id: ""
	I0917 18:29:42.030522   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.030534   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:42.030542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:42.030621   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:42.067240   78008 cri.go:89] found id: ""
	I0917 18:29:42.067274   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.067287   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:42.067312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:42.067394   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:42.106093   78008 cri.go:89] found id: ""
	I0917 18:29:42.106124   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.106137   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:42.106148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:42.106227   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:42.148581   78008 cri.go:89] found id: ""
	I0917 18:29:42.148623   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.148635   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:42.148643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:42.148729   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:42.188248   78008 cri.go:89] found id: ""
	I0917 18:29:42.188277   78008 logs.go:276] 0 containers: []
	W0917 18:29:42.188286   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:42.188294   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:42.188308   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:42.276866   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:42.276906   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:42.325636   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:42.325671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:42.379370   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:42.379406   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:42.396321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:42.396357   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:42.481770   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
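The recurring "failed describe nodes" block is the captured output of a single command that exits non-zero because the connection to localhost:8443 is refused. A minimal sketch of running that command and surfacing its stdout/stderr on failure (binary path as in the log; the wrapper itself is illustrative):

// Illustrative sketch: run the same describe-nodes command the log shows and
// report stdout/stderr when it exits non-zero, as it does above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}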
	I0917 18:29:44.982800   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:44.996898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:44.997053   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:45.036594   78008 cri.go:89] found id: ""
	I0917 18:29:45.036623   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.036632   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:45.036638   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:45.036699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:45.073760   78008 cri.go:89] found id: ""
	I0917 18:29:45.073788   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.073799   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:45.073807   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:45.073868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:45.111080   78008 cri.go:89] found id: ""
	I0917 18:29:45.111106   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.111116   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:45.111127   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:45.111196   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:45.149986   78008 cri.go:89] found id: ""
	I0917 18:29:45.150017   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.150027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:45.150035   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:45.150099   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:45.187597   78008 cri.go:89] found id: ""
	I0917 18:29:45.187620   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.187629   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:45.187635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:45.187701   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:45.234149   78008 cri.go:89] found id: ""
	I0917 18:29:45.234174   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.234182   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:45.234188   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:45.234236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:45.269840   78008 cri.go:89] found id: ""
	I0917 18:29:45.269867   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.269875   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:45.269882   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:45.269944   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:45.306377   78008 cri.go:89] found id: ""
	I0917 18:29:45.306407   78008 logs.go:276] 0 containers: []
	W0917 18:29:45.306418   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:45.306427   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:45.306441   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:45.388767   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:45.388788   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:45.388799   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:45.470114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:45.470147   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:45.516157   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:45.516185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:45.573857   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:45.573895   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:41.166141   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:43.664951   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.501916   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.000980   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:44.626656   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:47.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
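The interleaved pod_ready lines (processes 77264, 77433, 77819) appear to come from parallel test runs, each polling its metrics-server pod's Ready condition. A minimal sketch of such a poll, using kubectl rather than minikube's client-go helper and with hypothetical names and timeouts, is:

// Illustrative sketch: poll a pod's Ready condition until it is True or a
// deadline passes. Namespace and pod name are taken from the log above;
// the kubectl-based approach and the 10-minute deadline are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "metrics-server-6867b74b74-g2ttm")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}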
	I0917 18:29:48.090706   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:48.105691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:48.105776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:48.150986   78008 cri.go:89] found id: ""
	I0917 18:29:48.151013   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.151024   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:48.151032   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:48.151100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:48.192061   78008 cri.go:89] found id: ""
	I0917 18:29:48.192090   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.192099   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:48.192104   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:48.192161   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:48.229101   78008 cri.go:89] found id: ""
	I0917 18:29:48.229131   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.229148   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:48.229157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:48.229220   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:48.265986   78008 cri.go:89] found id: ""
	I0917 18:29:48.266016   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.266027   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:48.266034   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:48.266095   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:48.303726   78008 cri.go:89] found id: ""
	I0917 18:29:48.303766   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.303776   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:48.303781   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:48.303830   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:48.339658   78008 cri.go:89] found id: ""
	I0917 18:29:48.339686   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.339696   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:48.339704   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:48.339774   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:48.379115   78008 cri.go:89] found id: ""
	I0917 18:29:48.379140   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.379157   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:48.379164   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:48.379218   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:48.414414   78008 cri.go:89] found id: ""
	I0917 18:29:48.414449   78008 logs.go:276] 0 containers: []
	W0917 18:29:48.414461   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:48.414472   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:48.414488   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:48.428450   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:48.428477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:48.514098   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:48.514125   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:48.514140   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:48.593472   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:48.593505   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:48.644071   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:48.644108   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:46.165499   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:48.166008   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:50.663751   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.001133   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.001465   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:49.125957   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.126670   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:51.202414   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:51.216803   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:51.216880   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:51.258947   78008 cri.go:89] found id: ""
	I0917 18:29:51.258982   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.259000   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:51.259009   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:51.259075   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:51.298904   78008 cri.go:89] found id: ""
	I0917 18:29:51.298937   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.298949   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:51.298957   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:51.299019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:51.340714   78008 cri.go:89] found id: ""
	I0917 18:29:51.340743   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.340755   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:51.340761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:51.340823   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:51.382480   78008 cri.go:89] found id: ""
	I0917 18:29:51.382518   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.382527   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:51.382532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:51.382584   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:51.423788   78008 cri.go:89] found id: ""
	I0917 18:29:51.423818   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.423829   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:51.423836   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:51.423905   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:51.459714   78008 cri.go:89] found id: ""
	I0917 18:29:51.459740   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.459755   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:51.459762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:51.459810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:51.495817   78008 cri.go:89] found id: ""
	I0917 18:29:51.495850   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.495862   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:51.495870   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:51.495926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:51.531481   78008 cri.go:89] found id: ""
	I0917 18:29:51.531521   78008 logs.go:276] 0 containers: []
	W0917 18:29:51.531538   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:51.531550   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:51.531566   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:51.547085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:51.547120   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:51.622717   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:51.622743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:51.622758   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:51.701363   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:51.701404   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:51.749746   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:51.749779   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.306208   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:54.320659   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:54.320737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:54.365488   78008 cri.go:89] found id: ""
	I0917 18:29:54.365513   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.365521   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:54.365527   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:54.365588   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:54.417659   78008 cri.go:89] found id: ""
	I0917 18:29:54.417689   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.417700   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:54.417706   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:54.417773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:54.460760   78008 cri.go:89] found id: ""
	I0917 18:29:54.460795   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.460806   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:54.460814   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:54.460865   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:54.501371   78008 cri.go:89] found id: ""
	I0917 18:29:54.501405   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.501419   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:54.501428   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:54.501501   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:54.549810   78008 cri.go:89] found id: ""
	I0917 18:29:54.549844   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.549853   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:54.549859   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:54.549910   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:54.586837   78008 cri.go:89] found id: ""
	I0917 18:29:54.586860   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.586867   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:54.586881   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:54.586942   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:54.623858   78008 cri.go:89] found id: ""
	I0917 18:29:54.623887   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.623898   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:54.623905   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:54.623967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:54.660913   78008 cri.go:89] found id: ""
	I0917 18:29:54.660945   78008 logs.go:276] 0 containers: []
	W0917 18:29:54.660955   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:54.660965   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:54.660979   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:54.716523   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:54.716560   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:54.731846   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:54.731877   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:54.812288   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:29:54.812311   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:54.812323   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:54.892779   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:54.892819   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
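The "describe nodes" gather keeps failing with "The connection to the server localhost:8443 was refused" because the kubectl call needs a running apiserver, and none of the cycles above find a kube-apiserver container. A quick reachability probe that reproduces the same symptom on the node, assuming the apiserver address localhost:8443 shown in the log (editorial sketch):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The refused connection in the log corresponds to this dial failing.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Until this dial succeeds, every "Gathering logs for describe nodes" step will log the same status-1 failure.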
	I0917 18:29:52.663861   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:54.664903   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.501802   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.001407   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:53.624682   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:56.124445   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:57.440435   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:29:57.454886   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:29:57.454964   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:29:57.491408   78008 cri.go:89] found id: ""
	I0917 18:29:57.491440   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.491453   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:29:57.491461   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:29:57.491523   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:29:57.535786   78008 cri.go:89] found id: ""
	I0917 18:29:57.535814   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.535829   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:29:57.535837   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:29:57.535904   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:57.578014   78008 cri.go:89] found id: ""
	I0917 18:29:57.578043   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.578051   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:29:57.578057   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:29:57.578108   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:29:57.615580   78008 cri.go:89] found id: ""
	I0917 18:29:57.615615   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.615626   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:29:57.615634   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:29:57.615699   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:29:57.660250   78008 cri.go:89] found id: ""
	I0917 18:29:57.660285   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.660296   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:29:57.660305   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:29:57.660366   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:29:57.700495   78008 cri.go:89] found id: ""
	I0917 18:29:57.700526   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.700536   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:29:57.700542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:29:57.700600   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:29:57.740580   78008 cri.go:89] found id: ""
	I0917 18:29:57.740616   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.740627   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:29:57.740635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:29:57.740694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:29:57.776982   78008 cri.go:89] found id: ""
	I0917 18:29:57.777012   78008 logs.go:276] 0 containers: []
	W0917 18:29:57.777024   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:29:57.777035   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:29:57.777049   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:29:57.877144   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:29:57.877184   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:29:57.923875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:29:57.923912   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:29:57.976988   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:29:57.977025   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:29:57.992196   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:29:57.992223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:29:58.071161   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:00.571930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:00.586999   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:00.587083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:00.625833   78008 cri.go:89] found id: ""
	I0917 18:30:00.625856   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.625864   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:00.625869   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:00.625924   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:00.669976   78008 cri.go:89] found id: ""
	I0917 18:30:00.669999   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.670007   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:00.670012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:00.670072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:29:56.665386   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:59.163695   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.002576   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.500510   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:29:58.624759   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.633084   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.124695   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:00.708223   78008 cri.go:89] found id: ""
	I0917 18:30:00.708249   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.708257   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:00.708263   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:00.708315   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:00.743322   78008 cri.go:89] found id: ""
	I0917 18:30:00.743352   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.743364   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:00.743371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:00.743508   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:00.778595   78008 cri.go:89] found id: ""
	I0917 18:30:00.778625   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.778635   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:00.778643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:00.778706   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:00.816878   78008 cri.go:89] found id: ""
	I0917 18:30:00.816911   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.816923   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:00.816930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:00.816983   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:00.855841   78008 cri.go:89] found id: ""
	I0917 18:30:00.855876   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.855889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:00.855898   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:00.855974   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:00.897170   78008 cri.go:89] found id: ""
	I0917 18:30:00.897195   78008 logs.go:276] 0 containers: []
	W0917 18:30:00.897203   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:00.897210   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:00.897236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:00.949640   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:00.949680   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:00.963799   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:00.963825   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:01.050102   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:01.050123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:01.050135   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:01.129012   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:01.129061   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:03.672160   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:03.687572   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:03.687648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:03.729586   78008 cri.go:89] found id: ""
	I0917 18:30:03.729615   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.729626   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:03.729632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:03.729692   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:03.766993   78008 cri.go:89] found id: ""
	I0917 18:30:03.767022   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.767032   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:03.767039   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:03.767104   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:03.804340   78008 cri.go:89] found id: ""
	I0917 18:30:03.804368   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.804378   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:03.804385   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:03.804451   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:03.847020   78008 cri.go:89] found id: ""
	I0917 18:30:03.847050   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.847061   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:03.847068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:03.847158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:03.885900   78008 cri.go:89] found id: ""
	I0917 18:30:03.885927   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.885938   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:03.885946   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:03.886009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:03.925137   78008 cri.go:89] found id: ""
	I0917 18:30:03.925167   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.925178   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:03.925184   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:03.925259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:03.962225   78008 cri.go:89] found id: ""
	I0917 18:30:03.962261   78008 logs.go:276] 0 containers: []
	W0917 18:30:03.962275   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:03.962283   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:03.962352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:04.005866   78008 cri.go:89] found id: ""
	I0917 18:30:04.005892   78008 logs.go:276] 0 containers: []
	W0917 18:30:04.005902   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:04.005909   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:04.005921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:04.057578   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:04.057615   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:04.072178   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:04.072213   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:04.145219   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:04.145251   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:04.145285   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:04.234230   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:04.234282   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
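The remaining gather steps are fixed shell pipelines run over SSH on the node: journalctl for kubelet and CRI-O, a filtered dmesg, and a crictl/docker process listing. Running two of them directly would look roughly like this (editorial sketch; needs root on the node, command lines copied from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range cmds {
		// Same "/bin/bash -c" invocation style as ssh_runner uses in the log.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s (%d bytes collected) ==\n", name, len(out))
	}
}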
	I0917 18:30:01.165075   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.666085   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.672830   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:03.000954   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.501361   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:05.124840   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:07.126821   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:06.777988   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:06.793426   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:06.793500   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:06.833313   78008 cri.go:89] found id: ""
	I0917 18:30:06.833352   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.833360   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:06.833365   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:06.833424   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:06.870020   78008 cri.go:89] found id: ""
	I0917 18:30:06.870047   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.870056   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:06.870062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:06.870124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:06.906682   78008 cri.go:89] found id: ""
	I0917 18:30:06.906716   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.906728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:06.906735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:06.906810   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:06.946328   78008 cri.go:89] found id: ""
	I0917 18:30:06.946356   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.946365   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:06.946371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:06.946418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:06.983832   78008 cri.go:89] found id: ""
	I0917 18:30:06.983856   78008 logs.go:276] 0 containers: []
	W0917 18:30:06.983865   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:06.983871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:06.983918   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:07.024526   78008 cri.go:89] found id: ""
	I0917 18:30:07.024560   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.024571   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:07.024579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:07.024637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:07.066891   78008 cri.go:89] found id: ""
	I0917 18:30:07.066917   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.066928   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:07.066935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:07.066997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:07.105669   78008 cri.go:89] found id: ""
	I0917 18:30:07.105709   78008 logs.go:276] 0 containers: []
	W0917 18:30:07.105721   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:07.105732   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:07.105754   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:07.120771   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:07.120802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:07.195243   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:07.195272   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:07.195287   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:07.284377   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:07.284428   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:07.326894   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:07.326924   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:09.886998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:09.900710   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:09.900773   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:09.943198   78008 cri.go:89] found id: ""
	I0917 18:30:09.943225   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.943234   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:09.943240   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:09.943300   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:09.980113   78008 cri.go:89] found id: ""
	I0917 18:30:09.980148   78008 logs.go:276] 0 containers: []
	W0917 18:30:09.980160   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:09.980167   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:09.980226   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:10.017582   78008 cri.go:89] found id: ""
	I0917 18:30:10.017613   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.017625   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:10.017632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:10.017681   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:10.053698   78008 cri.go:89] found id: ""
	I0917 18:30:10.053722   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.053731   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:10.053736   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:10.053784   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:10.091391   78008 cri.go:89] found id: ""
	I0917 18:30:10.091421   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.091433   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:10.091439   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:10.091496   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:10.130636   78008 cri.go:89] found id: ""
	I0917 18:30:10.130668   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.130677   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:10.130682   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:10.130736   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:10.168175   78008 cri.go:89] found id: ""
	I0917 18:30:10.168203   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.168214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:10.168222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:10.168313   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:10.207085   78008 cri.go:89] found id: ""
	I0917 18:30:10.207109   78008 logs.go:276] 0 containers: []
	W0917 18:30:10.207118   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:10.207126   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:10.207139   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:10.245978   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:10.246007   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:10.298522   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:10.298569   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:10.312878   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:10.312904   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:10.387530   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:10.387553   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:10.387565   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:08.165955   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.663887   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:08.000401   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:10.000928   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.001022   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:09.625405   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.124546   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:12.967663   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:12.982157   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:12.982215   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:13.020177   78008 cri.go:89] found id: ""
	I0917 18:30:13.020224   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.020235   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:13.020241   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:13.020310   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:13.056317   78008 cri.go:89] found id: ""
	I0917 18:30:13.056342   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.056351   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:13.056356   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:13.056404   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:13.091799   78008 cri.go:89] found id: ""
	I0917 18:30:13.091823   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.091832   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:13.091838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:13.091888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:13.130421   78008 cri.go:89] found id: ""
	I0917 18:30:13.130450   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.130460   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:13.130465   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:13.130518   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:13.170623   78008 cri.go:89] found id: ""
	I0917 18:30:13.170654   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.170664   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:13.170672   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:13.170732   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:13.206396   78008 cri.go:89] found id: ""
	I0917 18:30:13.206441   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.206452   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:13.206460   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:13.206514   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:13.243090   78008 cri.go:89] found id: ""
	I0917 18:30:13.243121   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.243132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:13.243139   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:13.243192   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:13.285690   78008 cri.go:89] found id: ""
	I0917 18:30:13.285730   78008 logs.go:276] 0 containers: []
	W0917 18:30:13.285740   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:13.285747   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:13.285759   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:13.361992   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:13.362021   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:13.362043   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:13.448424   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:13.448467   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:13.489256   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:13.489284   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:13.544698   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:13.544735   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:12.665127   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:15.164296   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.501748   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:17.001119   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:14.124965   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:16.625638   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
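The interleaved pod_ready lines (PIDs 77264, 77433, 77819) come from the other test profiles, each polling its metrics-server pod until the Ready condition turns True. A minimal version of that wait, assuming kubectl's current context points at the cluster in question (editorial sketch; pod name copied from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const pod = "metrics-server-6867b74b74-g2ttm"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// JSONPath filter pulls the status of the pod's Ready condition.
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q Ready=%q, retrying...\n", pod, status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the Ready condition")
}

As long as that condition stays False, the test's pod_ready.go wait keeps logging the status lines seen above.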
	I0917 18:30:16.060014   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:16.073504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:16.073564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:16.110538   78008 cri.go:89] found id: ""
	I0917 18:30:16.110567   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.110579   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:16.110587   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:16.110648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:16.148521   78008 cri.go:89] found id: ""
	I0917 18:30:16.148551   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.148562   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:16.148570   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:16.148640   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:16.182772   78008 cri.go:89] found id: ""
	I0917 18:30:16.182796   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.182804   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:16.182809   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:16.182858   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:16.219617   78008 cri.go:89] found id: ""
	I0917 18:30:16.219642   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.219653   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:16.219660   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:16.219714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:16.257320   78008 cri.go:89] found id: ""
	I0917 18:30:16.257345   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.257354   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:16.257359   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:16.257419   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:16.295118   78008 cri.go:89] found id: ""
	I0917 18:30:16.295150   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.295161   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:16.295168   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:16.295234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:16.332448   78008 cri.go:89] found id: ""
	I0917 18:30:16.332482   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.332493   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:16.332500   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:16.332564   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:16.370155   78008 cri.go:89] found id: ""
	I0917 18:30:16.370182   78008 logs.go:276] 0 containers: []
	W0917 18:30:16.370189   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:16.370197   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:16.370208   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:16.410230   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:16.410260   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:16.462306   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:16.462342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:16.476472   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:16.476506   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:16.550449   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:16.550479   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:16.550497   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.129550   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:19.143333   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:19.143415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:19.184184   78008 cri.go:89] found id: ""
	I0917 18:30:19.184213   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.184224   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:19.184231   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:19.184289   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:19.219455   78008 cri.go:89] found id: ""
	I0917 18:30:19.219489   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.219501   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:19.219508   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:19.219568   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:19.257269   78008 cri.go:89] found id: ""
	I0917 18:30:19.257303   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.257315   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:19.257328   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:19.257405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:19.293898   78008 cri.go:89] found id: ""
	I0917 18:30:19.293931   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.293943   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:19.293951   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:19.294009   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:19.339154   78008 cri.go:89] found id: ""
	I0917 18:30:19.339183   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.339194   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:19.339201   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:19.339268   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:19.378608   78008 cri.go:89] found id: ""
	I0917 18:30:19.378634   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.378646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:19.378653   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:19.378720   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:19.415280   78008 cri.go:89] found id: ""
	I0917 18:30:19.415311   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.415322   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:19.415330   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:19.415396   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:19.454025   78008 cri.go:89] found id: ""
	I0917 18:30:19.454066   78008 logs.go:276] 0 containers: []
	W0917 18:30:19.454079   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:19.454089   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:19.454107   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:19.505918   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:19.505950   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:19.520996   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:19.521027   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:19.597408   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:19.597431   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:19.597442   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:19.678454   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:19.678487   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:17.165495   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.665976   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.001210   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.001549   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:19.123461   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:21.124423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.124646   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:22.223094   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:22.238644   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:22.238722   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:22.279497   78008 cri.go:89] found id: ""
	I0917 18:30:22.279529   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.279541   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:22.279554   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:22.279616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:22.315953   78008 cri.go:89] found id: ""
	I0917 18:30:22.315980   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.315990   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:22.315997   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:22.316061   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:22.355157   78008 cri.go:89] found id: ""
	I0917 18:30:22.355191   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.355204   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:22.355212   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:22.355278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:22.393304   78008 cri.go:89] found id: ""
	I0917 18:30:22.393335   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.393346   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:22.393353   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:22.393405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:22.437541   78008 cri.go:89] found id: ""
	I0917 18:30:22.437567   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.437576   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:22.437582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:22.437637   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:22.478560   78008 cri.go:89] found id: ""
	I0917 18:30:22.478588   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.478596   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:22.478601   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:22.478661   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:22.516049   78008 cri.go:89] found id: ""
	I0917 18:30:22.516084   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.516093   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:22.516099   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:22.516151   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.554321   78008 cri.go:89] found id: ""
	I0917 18:30:22.554350   78008 logs.go:276] 0 containers: []
	W0917 18:30:22.554359   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:22.554367   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:22.554377   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:22.613073   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:22.613110   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:22.627768   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:22.627797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:22.710291   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:22.710318   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:22.710333   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:22.807999   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:22.808035   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.350639   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:25.366302   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:25.366405   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:25.411585   78008 cri.go:89] found id: ""
	I0917 18:30:25.411613   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.411625   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:25.411632   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:25.411694   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:25.453414   78008 cri.go:89] found id: ""
	I0917 18:30:25.453441   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.453461   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:25.453467   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:25.453529   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:25.489776   78008 cri.go:89] found id: ""
	I0917 18:30:25.489803   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.489812   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:25.489817   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:25.489868   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:25.531594   78008 cri.go:89] found id: ""
	I0917 18:30:25.531624   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.531633   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:25.531638   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:25.531686   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:25.568796   78008 cri.go:89] found id: ""
	I0917 18:30:25.568820   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.568831   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:25.568837   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:25.568888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:25.605612   78008 cri.go:89] found id: ""
	I0917 18:30:25.605643   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.605654   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:25.605661   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:25.605719   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:25.647673   78008 cri.go:89] found id: ""
	I0917 18:30:25.647698   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.647708   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:25.647713   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:25.647772   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:22.164631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:24.165353   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:23.500355   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.503250   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.125192   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:27.125540   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:25.686943   78008 cri.go:89] found id: ""
	I0917 18:30:25.686976   78008 logs.go:276] 0 containers: []
	W0917 18:30:25.686989   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:25.687000   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:25.687022   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:25.728440   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:25.728477   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:25.778211   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:25.778254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:25.792519   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:25.792547   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:25.879452   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:25.879477   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:25.879492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:28.460531   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:28.474595   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:28.474689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:28.531065   78008 cri.go:89] found id: ""
	I0917 18:30:28.531099   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.531108   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:28.531117   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:28.531184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:28.571952   78008 cri.go:89] found id: ""
	I0917 18:30:28.571991   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.572002   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:28.572012   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:28.572081   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:28.608315   78008 cri.go:89] found id: ""
	I0917 18:30:28.608348   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.608364   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:28.608371   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:28.608433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:28.647882   78008 cri.go:89] found id: ""
	I0917 18:30:28.647913   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.647925   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:28.647932   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:28.647997   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:28.684998   78008 cri.go:89] found id: ""
	I0917 18:30:28.685021   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.685030   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:28.685036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:28.685098   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:28.724249   78008 cri.go:89] found id: ""
	I0917 18:30:28.724274   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.724282   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:28.724287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:28.724348   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:28.765932   78008 cri.go:89] found id: ""
	I0917 18:30:28.765965   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.765976   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:28.765982   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:28.766047   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:28.803857   78008 cri.go:89] found id: ""
	I0917 18:30:28.803888   78008 logs.go:276] 0 containers: []
	W0917 18:30:28.803899   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:28.803910   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:28.803923   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:28.863667   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:28.863703   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:28.878148   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:28.878187   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:28.956714   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:28.956743   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:28.956760   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:29.036303   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:29.036342   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:26.664369   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.665390   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:28.001973   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:30.500284   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:29.126782   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.626235   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:31.581741   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:31.595509   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:31.595592   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:31.631185   78008 cri.go:89] found id: ""
	I0917 18:30:31.631215   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.631227   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:31.631234   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:31.631286   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:31.668059   78008 cri.go:89] found id: ""
	I0917 18:30:31.668091   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.668102   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:31.668109   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:31.668168   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:31.705807   78008 cri.go:89] found id: ""
	I0917 18:30:31.705838   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.705849   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:31.705856   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:31.705925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:31.750168   78008 cri.go:89] found id: ""
	I0917 18:30:31.750198   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.750212   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:31.750220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:31.750282   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:31.792032   78008 cri.go:89] found id: ""
	I0917 18:30:31.792054   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.792063   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:31.792069   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:31.792130   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:31.828596   78008 cri.go:89] found id: ""
	I0917 18:30:31.828632   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.828646   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:31.828654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:31.828708   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:31.871963   78008 cri.go:89] found id: ""
	I0917 18:30:31.872000   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.872013   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:31.872023   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:31.872094   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:31.906688   78008 cri.go:89] found id: ""
	I0917 18:30:31.906718   78008 logs.go:276] 0 containers: []
	W0917 18:30:31.906727   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:31.906735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:31.906746   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:31.920311   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:31.920339   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:32.009966   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:32.009992   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:32.010006   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:32.088409   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:32.088447   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:32.132771   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:32.132806   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:34.686159   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:34.700133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:34.700211   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:34.739392   78008 cri.go:89] found id: ""
	I0917 18:30:34.739431   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.739445   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:34.739453   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:34.739522   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:34.779141   78008 cri.go:89] found id: ""
	I0917 18:30:34.779175   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.779188   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:34.779195   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:34.779260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:34.819883   78008 cri.go:89] found id: ""
	I0917 18:30:34.819907   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.819915   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:34.819920   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:34.819967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:34.855886   78008 cri.go:89] found id: ""
	I0917 18:30:34.855912   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.855923   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:34.855931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:34.855999   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:34.903919   78008 cri.go:89] found id: ""
	I0917 18:30:34.903956   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.903968   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:34.903975   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:34.904042   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:34.951895   78008 cri.go:89] found id: ""
	I0917 18:30:34.951925   78008 logs.go:276] 0 containers: []
	W0917 18:30:34.951936   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:34.951943   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:34.952007   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:35.013084   78008 cri.go:89] found id: ""
	I0917 18:30:35.013124   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.013132   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:35.013137   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:35.013189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:35.051565   78008 cri.go:89] found id: ""
	I0917 18:30:35.051589   78008 logs.go:276] 0 containers: []
	W0917 18:30:35.051598   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:35.051606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:35.051616   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:35.092723   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:35.092753   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:35.147996   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:35.148037   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:35.164989   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:35.165030   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:35.246216   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:35.246239   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:35.246252   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:31.163920   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:33.664255   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:32.500662   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:35.002015   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:34.124883   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:36.125144   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.125514   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.828811   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:37.846467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:37.846534   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:37.884725   78008 cri.go:89] found id: ""
	I0917 18:30:37.884758   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.884769   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:37.884777   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:37.884836   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:37.923485   78008 cri.go:89] found id: ""
	I0917 18:30:37.923517   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.923525   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:37.923531   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:37.923597   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:37.962829   78008 cri.go:89] found id: ""
	I0917 18:30:37.962857   78008 logs.go:276] 0 containers: []
	W0917 18:30:37.962867   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:37.962873   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:37.962938   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:38.003277   78008 cri.go:89] found id: ""
	I0917 18:30:38.003305   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.003313   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:38.003319   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:38.003380   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:38.047919   78008 cri.go:89] found id: ""
	I0917 18:30:38.047952   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.047963   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:38.047971   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:38.048043   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:38.084853   78008 cri.go:89] found id: ""
	I0917 18:30:38.084883   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.084896   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:38.084904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:38.084967   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:38.122340   78008 cri.go:89] found id: ""
	I0917 18:30:38.122369   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.122379   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:38.122387   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:38.122446   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:38.163071   78008 cri.go:89] found id: ""
	I0917 18:30:38.163101   78008 logs.go:276] 0 containers: []
	W0917 18:30:38.163112   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:38.163121   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:38.163134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:38.243772   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:38.243812   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:38.291744   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:38.291777   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:38.346738   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:38.346778   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:38.361908   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:38.361953   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:38.441730   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:36.165051   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:38.165173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.664192   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:37.500496   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:39.501199   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:42.000608   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.626165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:43.125533   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:40.942693   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:40.960643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:40.960713   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:41.016226   78008 cri.go:89] found id: ""
	I0917 18:30:41.016255   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.016265   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:41.016270   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:41.016328   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:41.054315   78008 cri.go:89] found id: ""
	I0917 18:30:41.054342   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.054353   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:41.054360   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:41.054426   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:41.092946   78008 cri.go:89] found id: ""
	I0917 18:30:41.092978   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.092991   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:41.092998   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:41.093058   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:41.133385   78008 cri.go:89] found id: ""
	I0917 18:30:41.133415   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.133423   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:41.133430   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:41.133487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:41.173993   78008 cri.go:89] found id: ""
	I0917 18:30:41.174017   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.174025   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:41.174030   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:41.174083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:41.211127   78008 cri.go:89] found id: ""
	I0917 18:30:41.211154   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.211168   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:41.211174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:41.211244   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:41.248607   78008 cri.go:89] found id: ""
	I0917 18:30:41.248632   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.248645   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:41.248652   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:41.248714   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:41.284580   78008 cri.go:89] found id: ""
	I0917 18:30:41.284612   78008 logs.go:276] 0 containers: []
	W0917 18:30:41.284621   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:41.284629   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:41.284640   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:41.336573   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:41.336613   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:41.352134   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:41.352167   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:41.419061   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:41.419085   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:41.419099   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:41.499758   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:41.499792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.043361   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:44.057270   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:44.057339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:44.096130   78008 cri.go:89] found id: ""
	I0917 18:30:44.096165   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.096176   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:44.096184   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:44.096238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:44.134483   78008 cri.go:89] found id: ""
	I0917 18:30:44.134514   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.134526   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:44.134534   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:44.134601   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:44.172723   78008 cri.go:89] found id: ""
	I0917 18:30:44.172759   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.172774   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:44.172782   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:44.172855   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:44.208478   78008 cri.go:89] found id: ""
	I0917 18:30:44.208506   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.208514   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:44.208519   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:44.208577   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:44.249352   78008 cri.go:89] found id: ""
	I0917 18:30:44.249381   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.249391   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:44.249398   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:44.249457   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:44.291156   78008 cri.go:89] found id: ""
	I0917 18:30:44.291180   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.291188   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:44.291194   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:44.291243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:44.331580   78008 cri.go:89] found id: ""
	I0917 18:30:44.331612   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.331623   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:44.331632   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:44.331705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:44.370722   78008 cri.go:89] found id: ""
	I0917 18:30:44.370750   78008 logs.go:276] 0 containers: []
	W0917 18:30:44.370763   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:44.370774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:44.370797   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:44.421126   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:44.421161   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:44.478581   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:44.478624   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:44.493492   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:44.493522   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:44.566317   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:44.566347   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:44.566358   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:42.664631   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.664871   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:44.001209   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:46.003437   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:45.625415   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.626515   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:47.147466   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:47.162590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:47.162654   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:47.201382   78008 cri.go:89] found id: ""
	I0917 18:30:47.201409   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.201418   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:47.201423   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:47.201474   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:47.249536   78008 cri.go:89] found id: ""
	I0917 18:30:47.249561   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.249569   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:47.249574   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:47.249631   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:47.292337   78008 cri.go:89] found id: ""
	I0917 18:30:47.292361   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.292369   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:47.292376   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:47.292438   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:47.341387   78008 cri.go:89] found id: ""
	I0917 18:30:47.341421   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.341433   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:47.341447   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:47.341531   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:47.382687   78008 cri.go:89] found id: ""
	I0917 18:30:47.382719   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.382748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:47.382762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:47.382827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:47.419598   78008 cri.go:89] found id: ""
	I0917 18:30:47.419632   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.419644   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:47.419650   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:47.419717   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:47.456104   78008 cri.go:89] found id: ""
	I0917 18:30:47.456131   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.456141   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:47.456148   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:47.456210   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:47.498610   78008 cri.go:89] found id: ""
	I0917 18:30:47.498643   78008 logs.go:276] 0 containers: []
	W0917 18:30:47.498654   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:47.498665   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:47.498706   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:47.573796   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:47.573819   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:47.573830   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:47.651234   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:47.651271   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:47.692875   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:47.692902   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:47.747088   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:47.747128   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.262789   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:50.277262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:50.277415   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:50.314866   78008 cri.go:89] found id: ""
	I0917 18:30:50.314902   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.314911   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:50.314916   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:50.314971   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:50.353490   78008 cri.go:89] found id: ""
	I0917 18:30:50.353527   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.353536   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:50.353542   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:50.353590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:50.391922   78008 cri.go:89] found id: ""
	I0917 18:30:50.391944   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.391952   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:50.391957   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:50.392003   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:50.431088   78008 cri.go:89] found id: ""
	I0917 18:30:50.431118   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.431129   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:50.431136   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:50.431186   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:50.469971   78008 cri.go:89] found id: ""
	I0917 18:30:50.469999   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.470010   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:50.470018   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:50.470083   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:50.509121   78008 cri.go:89] found id: ""
	I0917 18:30:50.509153   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.509165   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:50.509172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:50.509256   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:50.546569   78008 cri.go:89] found id: ""
	I0917 18:30:50.546594   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.546602   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:50.546607   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:50.546656   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:50.586045   78008 cri.go:89] found id: ""
	I0917 18:30:50.586071   78008 logs.go:276] 0 containers: []
	W0917 18:30:50.586080   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:50.586088   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:50.586098   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:50.642994   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:50.643040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:50.658018   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:50.658050   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 18:30:46.665597   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:49.164714   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:48.501502   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:51.001554   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:50.124526   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:52.625006   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	W0917 18:30:50.730760   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:50.730792   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:50.730808   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:50.810154   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:50.810185   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:53.356859   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:53.371313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:53.371406   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:53.412822   78008 cri.go:89] found id: ""
	I0917 18:30:53.412847   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.412858   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:53.412865   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:53.412931   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:53.448900   78008 cri.go:89] found id: ""
	I0917 18:30:53.448932   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.448943   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:53.448950   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:53.449014   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:53.487141   78008 cri.go:89] found id: ""
	I0917 18:30:53.487167   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.487176   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:53.487182   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:53.487251   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:53.528899   78008 cri.go:89] found id: ""
	I0917 18:30:53.528928   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.528940   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:53.528947   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:53.529008   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:53.564795   78008 cri.go:89] found id: ""
	I0917 18:30:53.564827   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.564839   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:53.564847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:53.564914   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:53.605208   78008 cri.go:89] found id: ""
	I0917 18:30:53.605257   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.605268   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:53.605277   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:53.605339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:53.647177   78008 cri.go:89] found id: ""
	I0917 18:30:53.647205   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.647214   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:53.647219   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:53.647278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:53.694030   78008 cri.go:89] found id: ""
	I0917 18:30:53.694057   78008 logs.go:276] 0 containers: []
	W0917 18:30:53.694067   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:53.694075   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:53.694085   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:53.746611   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:53.746645   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:53.761563   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:53.761595   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:53.835663   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:53.835694   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:53.835709   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:53.920796   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:53.920848   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:51.166015   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.665173   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:53.001959   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:55.501150   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:54.625124   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.626246   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:56.468452   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:56.482077   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:56.482148   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:56.518569   78008 cri.go:89] found id: ""
	I0917 18:30:56.518593   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.518601   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:56.518607   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:56.518665   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:56.560000   78008 cri.go:89] found id: ""
	I0917 18:30:56.560033   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.560045   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:56.560054   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:56.560117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:56.600391   78008 cri.go:89] found id: ""
	I0917 18:30:56.600423   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.600435   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:56.600442   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:56.600519   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:56.637674   78008 cri.go:89] found id: ""
	I0917 18:30:56.637706   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.637720   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:56.637728   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:56.637781   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:56.673297   78008 cri.go:89] found id: ""
	I0917 18:30:56.673329   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.673340   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:56.673348   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:56.673414   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:56.708863   78008 cri.go:89] found id: ""
	I0917 18:30:56.708898   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.708910   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:56.708917   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:56.708979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:56.745165   78008 cri.go:89] found id: ""
	I0917 18:30:56.745199   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.745211   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:56.745220   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:56.745297   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:56.793206   78008 cri.go:89] found id: ""
	I0917 18:30:56.793260   78008 logs.go:276] 0 containers: []
	W0917 18:30:56.793273   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:56.793284   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:30:56.793297   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:30:56.880661   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:30:56.880699   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:30:56.926789   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:56.926820   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:56.978914   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:56.978965   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:56.993199   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:56.993236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:30:57.065180   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:30:59.565927   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:30:59.579838   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:30:59.579921   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:30:59.616623   78008 cri.go:89] found id: ""
	I0917 18:30:59.616648   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.616656   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:30:59.616662   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:30:59.616716   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:30:59.659048   78008 cri.go:89] found id: ""
	I0917 18:30:59.659074   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.659084   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:30:59.659091   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:30:59.659153   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:30:59.694874   78008 cri.go:89] found id: ""
	I0917 18:30:59.694899   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.694910   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:30:59.694921   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:30:59.694988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:30:59.732858   78008 cri.go:89] found id: ""
	I0917 18:30:59.732889   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.732902   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:30:59.732909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:30:59.732972   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:30:59.771178   78008 cri.go:89] found id: ""
	I0917 18:30:59.771203   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.771212   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:30:59.771218   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:30:59.771271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:30:59.812456   78008 cri.go:89] found id: ""
	I0917 18:30:59.812481   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.812490   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:30:59.812498   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:30:59.812560   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:30:59.849876   78008 cri.go:89] found id: ""
	I0917 18:30:59.849906   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.849917   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:30:59.849924   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:30:59.849988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:30:59.889796   78008 cri.go:89] found id: ""
	I0917 18:30:59.889827   78008 logs.go:276] 0 containers: []
	W0917 18:30:59.889839   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:30:59.889850   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:30:59.889865   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:30:59.942735   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:30:59.942774   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:30:59.957159   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:30:59.957186   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:00.030497   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:00.030522   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:00.030537   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:00.112077   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:00.112134   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
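	Each discovery pass is followed by the same log-gathering pass just shown. A rough per-pass equivalent, with the binary path and kubeconfig location taken verbatim from the commands above (while the apiserver is down, the describe step keeps failing with the connection-refused error quoted throughout this log):

    #!/bin/bash
    # Last 400 lines of the kubelet and CRI-O units.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Kernel messages at warning level and above.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Node description via the bundled kubectl; refused while nothing answers on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # Container status, falling back to docker if crictl is not on PATH.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a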
	I0917 18:30:56.164011   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:58.164643   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.164831   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:57.502585   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:00.002013   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.002047   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:30:59.125188   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:01.127691   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:02.656525   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:02.671313   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:02.671379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:02.710779   78008 cri.go:89] found id: ""
	I0917 18:31:02.710807   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.710820   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:02.710827   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:02.710890   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:02.750285   78008 cri.go:89] found id: ""
	I0917 18:31:02.750315   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.750326   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:02.750335   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:02.750399   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:02.790676   78008 cri.go:89] found id: ""
	I0917 18:31:02.790704   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.790712   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:02.790718   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:02.790766   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:02.832124   78008 cri.go:89] found id: ""
	I0917 18:31:02.832154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.832166   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:02.832174   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:02.832236   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:02.868769   78008 cri.go:89] found id: ""
	I0917 18:31:02.868801   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.868813   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:02.868820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:02.868886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:02.910482   78008 cri.go:89] found id: ""
	I0917 18:31:02.910512   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.910524   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:02.910533   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:02.910587   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:02.948128   78008 cri.go:89] found id: ""
	I0917 18:31:02.948154   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.948165   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:02.948172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:02.948239   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:02.987981   78008 cri.go:89] found id: ""
	I0917 18:31:02.988007   78008 logs.go:276] 0 containers: []
	W0917 18:31:02.988018   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:02.988028   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:02.988042   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:03.044116   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:03.044157   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:03.059837   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:03.059866   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:03.134048   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:03.134073   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:03.134086   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:03.214751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:03.214792   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:02.169026   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.664829   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:04.501493   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:07.001722   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:03.625165   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:06.126203   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:05.768145   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:05.782375   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:05.782455   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:05.820083   78008 cri.go:89] found id: ""
	I0917 18:31:05.820116   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.820127   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:05.820134   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:05.820188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:05.856626   78008 cri.go:89] found id: ""
	I0917 18:31:05.856655   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.856666   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:05.856673   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:05.856737   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:05.893119   78008 cri.go:89] found id: ""
	I0917 18:31:05.893149   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.893162   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:05.893172   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:05.893299   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:05.931892   78008 cri.go:89] found id: ""
	I0917 18:31:05.931916   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.931924   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:05.931930   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:05.931991   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:05.968770   78008 cri.go:89] found id: ""
	I0917 18:31:05.968802   78008 logs.go:276] 0 containers: []
	W0917 18:31:05.968814   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:05.968820   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:05.968888   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:06.008183   78008 cri.go:89] found id: ""
	I0917 18:31:06.008208   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.008217   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:06.008222   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:06.008267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:06.043161   78008 cri.go:89] found id: ""
	I0917 18:31:06.043189   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.043199   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:06.043204   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:06.043271   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:06.079285   78008 cri.go:89] found id: ""
	I0917 18:31:06.079315   78008 logs.go:276] 0 containers: []
	W0917 18:31:06.079326   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:06.079336   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:06.079347   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:06.160863   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:06.160913   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:06.202101   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:06.202127   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:06.255482   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:06.255517   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:06.271518   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:06.271545   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:06.344034   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:08.844243   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:08.859312   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:08.859381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:08.896915   78008 cri.go:89] found id: ""
	I0917 18:31:08.896942   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.896952   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:08.896959   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:08.897022   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:08.937979   78008 cri.go:89] found id: ""
	I0917 18:31:08.938005   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.938014   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:08.938022   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:08.938072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:08.978502   78008 cri.go:89] found id: ""
	I0917 18:31:08.978536   78008 logs.go:276] 0 containers: []
	W0917 18:31:08.978548   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:08.978556   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:08.978616   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:09.044664   78008 cri.go:89] found id: ""
	I0917 18:31:09.044699   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.044711   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:09.044719   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:09.044796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:09.082888   78008 cri.go:89] found id: ""
	I0917 18:31:09.082923   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.082944   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:09.082954   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:09.083027   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:09.120314   78008 cri.go:89] found id: ""
	I0917 18:31:09.120339   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.120350   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:09.120357   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:09.120418   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:09.160137   78008 cri.go:89] found id: ""
	I0917 18:31:09.160165   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.160176   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:09.160183   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:09.160241   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:09.198711   78008 cri.go:89] found id: ""
	I0917 18:31:09.198741   78008 logs.go:276] 0 containers: []
	W0917 18:31:09.198749   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:09.198756   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:09.198766   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:09.253431   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:09.253485   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:09.270520   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:09.270554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:09.349865   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:09.349889   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:09.349909   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:09.436606   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:09.436650   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:07.165101   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.165704   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:09.001786   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.500557   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:08.625085   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.124817   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:13.125531   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:11.981998   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:11.995472   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:11.995556   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:12.035854   78008 cri.go:89] found id: ""
	I0917 18:31:12.035883   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.035894   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:12.035902   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:12.035953   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:12.070923   78008 cri.go:89] found id: ""
	I0917 18:31:12.070953   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.070965   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:12.070973   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:12.071041   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:12.108151   78008 cri.go:89] found id: ""
	I0917 18:31:12.108176   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.108185   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:12.108190   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:12.108238   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:12.146050   78008 cri.go:89] found id: ""
	I0917 18:31:12.146081   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.146092   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:12.146100   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:12.146158   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:12.185355   78008 cri.go:89] found id: ""
	I0917 18:31:12.185387   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.185396   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:12.185402   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:12.185449   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:12.222377   78008 cri.go:89] found id: ""
	I0917 18:31:12.222403   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.222412   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:12.222418   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:12.222488   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:12.258190   78008 cri.go:89] found id: ""
	I0917 18:31:12.258231   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.258242   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:12.258249   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:12.258326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:12.295674   78008 cri.go:89] found id: ""
	I0917 18:31:12.295710   78008 logs.go:276] 0 containers: []
	W0917 18:31:12.295722   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:12.295731   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:12.295742   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:12.348185   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:12.348223   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:12.363961   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:12.363992   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:12.438630   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:12.438661   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:12.438676   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:12.520086   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:12.520133   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.061926   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:15.079141   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:15.079206   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:15.122722   78008 cri.go:89] found id: ""
	I0917 18:31:15.122812   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.122828   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:15.122837   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:15.122895   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:15.168184   78008 cri.go:89] found id: ""
	I0917 18:31:15.168209   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.168218   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:15.168225   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:15.168288   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:15.208219   78008 cri.go:89] found id: ""
	I0917 18:31:15.208246   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.208259   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:15.208266   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:15.208318   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:15.248082   78008 cri.go:89] found id: ""
	I0917 18:31:15.248114   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.248126   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:15.248133   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:15.248197   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:15.285215   78008 cri.go:89] found id: ""
	I0917 18:31:15.285263   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.285274   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:15.285281   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:15.285339   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:15.328617   78008 cri.go:89] found id: ""
	I0917 18:31:15.328650   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.328669   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:15.328675   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:15.328738   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:15.371869   78008 cri.go:89] found id: ""
	I0917 18:31:15.371895   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.371903   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:15.371909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:15.371955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:15.418109   78008 cri.go:89] found id: ""
	I0917 18:31:15.418136   78008 logs.go:276] 0 containers: []
	W0917 18:31:15.418145   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:15.418153   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:15.418166   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:15.443709   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:15.443741   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:15.540475   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:15.540499   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:15.540511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:15.627751   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:15.627781   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:15.671027   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:15.671056   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:11.664755   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.164563   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:14.001567   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:16.500724   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:15.127715   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:17.624831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
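	The interleaved pod_ready lines from processes 77264, 77433 and 77819 are three other test runs polling their metrics-server pods, all of which keep reporting Ready=False. A hedged sketch of an equivalent manual check, assuming the upstream k8s-app=metrics-server label (the pod names above follow the usual deployment hash pattern):

    # Print the Ready condition of the metrics-server pod in kube-system.
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'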
	I0917 18:31:18.223732   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:18.239161   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:18.239242   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:18.280252   78008 cri.go:89] found id: ""
	I0917 18:31:18.280282   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.280294   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:18.280301   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:18.280350   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:18.318774   78008 cri.go:89] found id: ""
	I0917 18:31:18.318805   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.318815   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:18.318821   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:18.318877   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:18.354755   78008 cri.go:89] found id: ""
	I0917 18:31:18.354785   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.354796   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:18.354804   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:18.354862   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:18.391283   78008 cri.go:89] found id: ""
	I0917 18:31:18.391310   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.391318   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:18.391324   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:18.391372   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:18.429026   78008 cri.go:89] found id: ""
	I0917 18:31:18.429062   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.429074   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:18.429081   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:18.429135   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:18.468318   78008 cri.go:89] found id: ""
	I0917 18:31:18.468351   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.468365   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:18.468372   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:18.468421   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:18.509871   78008 cri.go:89] found id: ""
	I0917 18:31:18.509903   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.509914   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:18.509922   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:18.509979   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:18.548662   78008 cri.go:89] found id: ""
	I0917 18:31:18.548694   78008 logs.go:276] 0 containers: []
	W0917 18:31:18.548705   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:18.548714   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:18.548729   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:18.587633   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:18.587662   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:18.640867   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:18.640910   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:18.658020   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:18.658054   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:18.729643   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:18.729674   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:18.729686   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:16.664372   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.666834   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:18.501952   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.001547   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:20.125423   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:22.626597   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:21.313013   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:21.329702   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:21.329768   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:21.378972   78008 cri.go:89] found id: ""
	I0917 18:31:21.378996   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.379004   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:21.379010   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:21.379065   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:21.433355   78008 cri.go:89] found id: ""
	I0917 18:31:21.433382   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.433393   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:21.433400   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:21.433462   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:21.489030   78008 cri.go:89] found id: ""
	I0917 18:31:21.489055   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.489063   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:21.489068   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:21.489124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:21.529089   78008 cri.go:89] found id: ""
	I0917 18:31:21.529119   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.529131   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:21.529138   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:21.529188   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:21.566892   78008 cri.go:89] found id: ""
	I0917 18:31:21.566919   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.566929   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:21.566935   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:21.566985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:21.605453   78008 cri.go:89] found id: ""
	I0917 18:31:21.605484   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.605496   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:21.605504   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:21.605569   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:21.647710   78008 cri.go:89] found id: ""
	I0917 18:31:21.647732   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.647740   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:21.647745   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:21.647804   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:21.687002   78008 cri.go:89] found id: ""
	I0917 18:31:21.687036   78008 logs.go:276] 0 containers: []
	W0917 18:31:21.687048   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:21.687058   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:21.687074   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:21.738591   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:21.738631   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:21.752950   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:21.752987   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:21.826268   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:21.826292   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:21.826306   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:21.906598   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:21.906646   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:24.453057   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:24.468867   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:24.468930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:24.511103   78008 cri.go:89] found id: ""
	I0917 18:31:24.511129   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.511140   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:24.511147   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:24.511200   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:24.546392   78008 cri.go:89] found id: ""
	I0917 18:31:24.546423   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.546434   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:24.546443   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:24.546505   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:24.583266   78008 cri.go:89] found id: ""
	I0917 18:31:24.583299   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.583310   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:24.583320   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:24.583381   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:24.620018   78008 cri.go:89] found id: ""
	I0917 18:31:24.620051   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.620063   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:24.620070   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:24.620133   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:24.659528   78008 cri.go:89] found id: ""
	I0917 18:31:24.659556   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.659566   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:24.659573   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:24.659636   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:24.699115   78008 cri.go:89] found id: ""
	I0917 18:31:24.699153   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.699167   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:24.699175   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:24.699234   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:24.745358   78008 cri.go:89] found id: ""
	I0917 18:31:24.745392   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.745404   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:24.745414   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:24.745483   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:24.786606   78008 cri.go:89] found id: ""
	I0917 18:31:24.786635   78008 logs.go:276] 0 containers: []
	W0917 18:31:24.786644   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:24.786657   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:24.786671   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:24.838417   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:24.838462   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:24.852959   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:24.852988   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:24.927013   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:24.927039   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:24.927058   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:25.008679   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:25.008720   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:21.164500   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.165380   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.665618   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:23.501265   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:26.002113   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:25.126406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.627599   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:27.549945   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:27.565336   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:27.565450   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:27.605806   78008 cri.go:89] found id: ""
	I0917 18:31:27.605844   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.605853   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:27.605860   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:27.605909   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:27.652915   78008 cri.go:89] found id: ""
	I0917 18:31:27.652955   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.652968   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:27.652977   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:27.653044   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:27.701732   78008 cri.go:89] found id: ""
	I0917 18:31:27.701759   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.701771   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:27.701778   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:27.701841   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:27.744587   78008 cri.go:89] found id: ""
	I0917 18:31:27.744616   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.744628   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:27.744635   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:27.744705   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:27.789161   78008 cri.go:89] found id: ""
	I0917 18:31:27.789196   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.789207   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:27.789214   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:27.789296   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:27.833484   78008 cri.go:89] found id: ""
	I0917 18:31:27.833513   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.833525   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:27.833532   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:27.833591   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:27.873669   78008 cri.go:89] found id: ""
	I0917 18:31:27.873703   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.873715   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:27.873722   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:27.873792   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:27.911270   78008 cri.go:89] found id: ""
	I0917 18:31:27.911301   78008 logs.go:276] 0 containers: []
	W0917 18:31:27.911313   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:27.911323   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:27.911336   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:27.951769   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:27.951798   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:28.002220   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:28.002254   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:28.017358   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:28.017392   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:28.091456   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:28.091481   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:28.091492   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:27.666003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.164548   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:28.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:31.005569   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.124439   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:32.126247   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:30.679643   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:30.693877   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:30.693948   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:30.732196   78008 cri.go:89] found id: ""
	I0917 18:31:30.732228   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.732240   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:30.732247   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:30.732320   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:30.774700   78008 cri.go:89] found id: ""
	I0917 18:31:30.774730   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.774742   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:30.774749   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:30.774838   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:30.814394   78008 cri.go:89] found id: ""
	I0917 18:31:30.814420   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.814428   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:30.814434   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:30.814487   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:30.854746   78008 cri.go:89] found id: ""
	I0917 18:31:30.854788   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.854801   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:30.854830   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:30.854899   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:30.893533   78008 cri.go:89] found id: ""
	I0917 18:31:30.893564   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.893574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:30.893580   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:30.893649   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:30.932719   78008 cri.go:89] found id: ""
	I0917 18:31:30.932746   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.932757   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:30.932763   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:30.932811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:30.974004   78008 cri.go:89] found id: ""
	I0917 18:31:30.974047   78008 logs.go:276] 0 containers: []
	W0917 18:31:30.974056   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:30.974061   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:30.974117   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:31.017469   78008 cri.go:89] found id: ""
	I0917 18:31:31.017498   78008 logs.go:276] 0 containers: []
	W0917 18:31:31.017509   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:31.017517   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:31.017529   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:31.094385   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:31.094409   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:31.094424   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:31.177975   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:31.178012   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:31.218773   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:31.218804   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:31.272960   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:31.272996   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:33.788825   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:33.804904   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:33.804985   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:33.847149   78008 cri.go:89] found id: ""
	I0917 18:31:33.847178   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.847190   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:33.847198   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:33.847259   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:33.883548   78008 cri.go:89] found id: ""
	I0917 18:31:33.883573   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.883581   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:33.883586   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:33.883632   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:33.917495   78008 cri.go:89] found id: ""
	I0917 18:31:33.917523   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.917535   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:33.917542   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:33.917634   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:33.954931   78008 cri.go:89] found id: ""
	I0917 18:31:33.954955   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.954963   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:33.954969   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:33.955019   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:33.991535   78008 cri.go:89] found id: ""
	I0917 18:31:33.991568   78008 logs.go:276] 0 containers: []
	W0917 18:31:33.991577   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:33.991582   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:33.991639   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:34.039451   78008 cri.go:89] found id: ""
	I0917 18:31:34.039478   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.039489   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:34.039497   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:34.039557   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:34.081258   78008 cri.go:89] found id: ""
	I0917 18:31:34.081300   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.081311   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:34.081317   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:34.081379   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:34.119557   78008 cri.go:89] found id: ""
	I0917 18:31:34.119586   78008 logs.go:276] 0 containers: []
	W0917 18:31:34.119597   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:34.119608   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:34.119623   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:34.163345   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:34.163379   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:34.218399   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:34.218454   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:34.232705   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:34.232736   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:34.309948   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:34.309972   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:34.309984   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:32.164688   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.165267   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:33.500604   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:35.501094   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:34.624847   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.624971   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:36.896504   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:36.913784   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:36.913870   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:36.954525   78008 cri.go:89] found id: ""
	I0917 18:31:36.954557   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.954568   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:36.954578   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:36.954648   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:36.989379   78008 cri.go:89] found id: ""
	I0917 18:31:36.989408   78008 logs.go:276] 0 containers: []
	W0917 18:31:36.989419   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:36.989426   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:36.989491   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:37.029078   78008 cri.go:89] found id: ""
	I0917 18:31:37.029107   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.029119   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:37.029126   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:37.029180   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:37.066636   78008 cri.go:89] found id: ""
	I0917 18:31:37.066670   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.066683   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:37.066691   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:37.066754   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:37.109791   78008 cri.go:89] found id: ""
	I0917 18:31:37.109827   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.109838   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:37.109849   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:37.109925   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:37.153415   78008 cri.go:89] found id: ""
	I0917 18:31:37.153448   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.153459   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:37.153467   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:37.153527   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:37.192826   78008 cri.go:89] found id: ""
	I0917 18:31:37.192853   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.192864   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:37.192871   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:37.192930   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:37.230579   78008 cri.go:89] found id: ""
	I0917 18:31:37.230632   78008 logs.go:276] 0 containers: []
	W0917 18:31:37.230647   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:37.230665   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:37.230677   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:37.315392   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:37.315430   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:37.356521   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:37.356554   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:37.410552   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:37.410591   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:37.426013   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:37.426040   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:37.499352   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:39.999538   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:40.014515   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:40.014590   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:40.051511   78008 cri.go:89] found id: ""
	I0917 18:31:40.051548   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.051558   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:40.051564   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:40.051623   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:40.089707   78008 cri.go:89] found id: ""
	I0917 18:31:40.089733   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.089747   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:40.089752   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:40.089802   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:40.137303   78008 cri.go:89] found id: ""
	I0917 18:31:40.137326   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.137335   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:40.137341   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:40.137389   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:40.176721   78008 cri.go:89] found id: ""
	I0917 18:31:40.176746   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.176755   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:40.176761   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:40.176809   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:40.212369   78008 cri.go:89] found id: ""
	I0917 18:31:40.212401   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.212412   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:40.212421   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:40.212494   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:40.255798   78008 cri.go:89] found id: ""
	I0917 18:31:40.255828   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.255838   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:40.255847   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:40.255982   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:40.293643   78008 cri.go:89] found id: ""
	I0917 18:31:40.293672   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.293682   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:40.293689   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:40.293752   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:40.332300   78008 cri.go:89] found id: ""
	I0917 18:31:40.332330   78008 logs.go:276] 0 containers: []
	W0917 18:31:40.332340   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:40.332350   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:40.332365   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:40.389068   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:40.389115   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:40.403118   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:40.403149   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:40.476043   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:40.476067   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:40.476081   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:40.563164   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:40.563204   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:36.664291   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.666750   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:37.501943   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:40.000891   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:42.001550   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:38.625406   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:41.124655   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.126544   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.112107   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:43.127968   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:43.128034   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:43.166351   78008 cri.go:89] found id: ""
	I0917 18:31:43.166371   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.166379   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:43.166384   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:43.166433   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:43.201124   78008 cri.go:89] found id: ""
	I0917 18:31:43.201160   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.201173   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:43.201181   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:43.201265   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:43.245684   78008 cri.go:89] found id: ""
	I0917 18:31:43.245717   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.245728   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:43.245735   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:43.245796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:43.282751   78008 cri.go:89] found id: ""
	I0917 18:31:43.282777   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.282785   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:43.282791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:43.282844   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:43.322180   78008 cri.go:89] found id: ""
	I0917 18:31:43.322212   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.322223   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:43.322230   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:43.322294   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:43.359575   78008 cri.go:89] found id: ""
	I0917 18:31:43.359608   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.359620   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:43.359627   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:43.359689   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:43.398782   78008 cri.go:89] found id: ""
	I0917 18:31:43.398811   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.398824   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:43.398833   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:43.398913   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:43.437747   78008 cri.go:89] found id: ""
	I0917 18:31:43.437771   78008 logs.go:276] 0 containers: []
	W0917 18:31:43.437779   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:43.437787   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:43.437800   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:43.477986   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:43.478019   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:43.532637   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:43.532674   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:43.547552   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:43.547577   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:43.632556   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:43.632578   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:43.632592   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:41.163988   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:43.165378   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.664803   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:44.500302   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.500489   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:45.128136   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:47.626024   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:46.214890   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:46.229327   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:46.229408   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:46.268605   78008 cri.go:89] found id: ""
	I0917 18:31:46.268632   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.268642   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:46.268649   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:46.268711   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:46.309508   78008 cri.go:89] found id: ""
	I0917 18:31:46.309539   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.309549   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:46.309558   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:46.309620   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:46.352610   78008 cri.go:89] found id: ""
	I0917 18:31:46.352639   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.352648   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:46.352654   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:46.352723   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:46.398702   78008 cri.go:89] found id: ""
	I0917 18:31:46.398738   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.398747   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:46.398753   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:46.398811   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:46.437522   78008 cri.go:89] found id: ""
	I0917 18:31:46.437545   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.437554   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:46.437559   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:46.437641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:46.474865   78008 cri.go:89] found id: ""
	I0917 18:31:46.474893   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.474902   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:46.474909   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:46.474957   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:46.514497   78008 cri.go:89] found id: ""
	I0917 18:31:46.514525   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.514536   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:46.514543   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:46.514605   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:46.556948   78008 cri.go:89] found id: ""
	I0917 18:31:46.556979   78008 logs.go:276] 0 containers: []
	W0917 18:31:46.556988   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:46.556997   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:46.557008   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:46.609300   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:46.609337   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:46.626321   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:46.626351   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:46.707669   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:46.707701   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:46.707714   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:46.789774   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:46.789815   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.332780   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:49.347262   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:49.347334   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:49.388368   78008 cri.go:89] found id: ""
	I0917 18:31:49.388411   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.388423   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:49.388431   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:49.388493   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:49.423664   78008 cri.go:89] found id: ""
	I0917 18:31:49.423694   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.423707   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:49.423714   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:49.423776   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:49.462882   78008 cri.go:89] found id: ""
	I0917 18:31:49.462911   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.462924   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:49.462931   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:49.462988   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:49.524014   78008 cri.go:89] found id: ""
	I0917 18:31:49.524047   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.524056   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:49.524062   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:49.524114   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:49.564703   78008 cri.go:89] found id: ""
	I0917 18:31:49.564737   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.564748   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:49.564762   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:49.564827   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:49.609460   78008 cri.go:89] found id: ""
	I0917 18:31:49.609484   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.609493   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:49.609499   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:49.609554   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:49.651008   78008 cri.go:89] found id: ""
	I0917 18:31:49.651032   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.651040   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:49.651045   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:49.651106   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:49.693928   78008 cri.go:89] found id: ""
	I0917 18:31:49.693954   78008 logs.go:276] 0 containers: []
	W0917 18:31:49.693961   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:49.693969   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:49.693981   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:49.774940   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:49.774977   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:49.820362   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:49.820398   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:49.875508   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:49.875549   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:49.890690   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:49.890723   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:49.967803   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:47.664890   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:49.664943   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:48.502246   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:51.001296   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:50.125915   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.625169   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:52.468533   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:52.483749   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:52.483812   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:52.523017   78008 cri.go:89] found id: ""
	I0917 18:31:52.523040   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.523048   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:52.523055   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:52.523101   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:52.559848   78008 cri.go:89] found id: ""
	I0917 18:31:52.559879   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.559889   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:52.559895   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:52.559955   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.597168   78008 cri.go:89] found id: ""
	I0917 18:31:52.597192   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.597200   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:52.597207   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:52.597278   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:52.634213   78008 cri.go:89] found id: ""
	I0917 18:31:52.634241   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.634252   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:52.634265   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:52.634326   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:52.673842   78008 cri.go:89] found id: ""
	I0917 18:31:52.673865   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.673873   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:52.673878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:52.673926   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:52.711568   78008 cri.go:89] found id: ""
	I0917 18:31:52.711596   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.711609   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:52.711617   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:52.711676   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:52.757002   78008 cri.go:89] found id: ""
	I0917 18:31:52.757030   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.757038   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:52.757043   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:52.757092   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:52.793092   78008 cri.go:89] found id: ""
	I0917 18:31:52.793126   78008 logs.go:276] 0 containers: []
	W0917 18:31:52.793135   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:52.793143   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:52.793155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:52.847641   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:52.847682   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:52.862287   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:52.862314   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:52.941307   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:52.941331   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:52.941344   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:53.026114   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:53.026155   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:55.573116   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:55.588063   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:55.588125   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:55.633398   78008 cri.go:89] found id: ""
	I0917 18:31:55.633422   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.633430   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:55.633437   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:55.633511   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:55.669754   78008 cri.go:89] found id: ""
	I0917 18:31:55.669785   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.669796   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:55.669803   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:55.669876   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:52.165645   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:54.166228   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:53.500688   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.501849   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.126327   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:57.624683   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:55.711492   78008 cri.go:89] found id: ""
	I0917 18:31:55.711521   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.711533   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:55.711541   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:55.711603   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:55.749594   78008 cri.go:89] found id: ""
	I0917 18:31:55.749628   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.749638   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:55.749643   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:55.749695   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:55.786114   78008 cri.go:89] found id: ""
	I0917 18:31:55.786143   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.786155   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:55.786162   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:55.786222   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:55.824254   78008 cri.go:89] found id: ""
	I0917 18:31:55.824282   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.824293   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:55.824301   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:55.824361   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:55.861690   78008 cri.go:89] found id: ""
	I0917 18:31:55.861718   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.861728   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:55.861733   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:55.861794   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:55.913729   78008 cri.go:89] found id: ""
	I0917 18:31:55.913754   78008 logs.go:276] 0 containers: []
	W0917 18:31:55.913766   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:55.913775   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:55.913788   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:55.976835   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:55.976880   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:56.003201   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:56.003236   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:56.092101   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:56.092123   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:56.092137   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:56.170498   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:56.170533   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:58.714212   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:31:58.730997   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:31:58.731072   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:31:58.775640   78008 cri.go:89] found id: ""
	I0917 18:31:58.775678   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.775693   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:31:58.775701   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:31:58.775770   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:31:58.811738   78008 cri.go:89] found id: ""
	I0917 18:31:58.811764   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.811776   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:31:58.811783   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:31:58.811852   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:31:58.849803   78008 cri.go:89] found id: ""
	I0917 18:31:58.849827   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.849836   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:31:58.849841   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:31:58.849898   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:31:58.885827   78008 cri.go:89] found id: ""
	I0917 18:31:58.885856   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.885871   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:31:58.885878   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:31:58.885943   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:31:58.925539   78008 cri.go:89] found id: ""
	I0917 18:31:58.925565   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.925574   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:31:58.925579   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:31:58.925628   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:31:58.961074   78008 cri.go:89] found id: ""
	I0917 18:31:58.961104   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.961116   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:31:58.961123   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:31:58.961190   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:31:58.997843   78008 cri.go:89] found id: ""
	I0917 18:31:58.997878   78008 logs.go:276] 0 containers: []
	W0917 18:31:58.997889   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:31:58.997896   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:31:58.997962   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:31:59.034836   78008 cri.go:89] found id: ""
	I0917 18:31:59.034866   78008 logs.go:276] 0 containers: []
	W0917 18:31:59.034876   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:31:59.034884   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:31:59.034899   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:31:59.049085   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:31:59.049116   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:31:59.126143   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:31:59.126168   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:31:59.126183   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:31:59.210043   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:31:59.210096   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:31:59.258546   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:31:59.258575   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:31:56.664145   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.664990   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.000809   77433 pod_ready.go:103] pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace has status "Ready":"False"
	I0917 18:31:58.494554   77433 pod_ready.go:82] duration metric: took 4m0.000545882s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" ...
	E0917 18:31:58.494588   77433 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-l8n57" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:31:58.494611   77433 pod_ready.go:39] duration metric: took 4m9.313096637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:31:58.494638   77433 kubeadm.go:597] duration metric: took 4m19.208089477s to restartPrimaryControlPlane
	W0917 18:31:58.494716   77433 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:31:58.494760   77433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:31:59.625888   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:02.125831   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:01.811930   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:01.833160   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:01.833223   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:01.891148   78008 cri.go:89] found id: ""
	I0917 18:32:01.891178   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.891189   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:01.891197   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:01.891260   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:01.954367   78008 cri.go:89] found id: ""
	I0917 18:32:01.954407   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.954418   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:01.954425   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:01.954490   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:01.998154   78008 cri.go:89] found id: ""
	I0917 18:32:01.998187   78008 logs.go:276] 0 containers: []
	W0917 18:32:01.998199   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:01.998206   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:01.998267   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:02.035412   78008 cri.go:89] found id: ""
	I0917 18:32:02.035446   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.035457   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:02.035464   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:02.035539   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:02.070552   78008 cri.go:89] found id: ""
	I0917 18:32:02.070586   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.070599   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:02.070604   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:02.070673   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:02.108680   78008 cri.go:89] found id: ""
	I0917 18:32:02.108717   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.108729   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:02.108737   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:02.108787   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:02.148560   78008 cri.go:89] found id: ""
	I0917 18:32:02.148585   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.148594   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:02.148600   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:02.148647   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:02.186398   78008 cri.go:89] found id: ""
	I0917 18:32:02.186434   78008 logs.go:276] 0 containers: []
	W0917 18:32:02.186445   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:02.186454   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:02.186468   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:02.273674   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:02.273695   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:02.273708   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:02.359656   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:02.359704   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:02.405465   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:02.405494   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:02.466534   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:02.466568   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:04.983572   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:04.998711   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:04.998796   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:05.038080   78008 cri.go:89] found id: ""
	I0917 18:32:05.038111   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.038121   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:05.038129   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:05.038189   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:05.074542   78008 cri.go:89] found id: ""
	I0917 18:32:05.074571   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.074582   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:05.074588   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:05.074652   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:05.113115   78008 cri.go:89] found id: ""
	I0917 18:32:05.113140   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.113149   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:05.113156   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:05.113216   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:05.151752   78008 cri.go:89] found id: ""
	I0917 18:32:05.151777   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.151786   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:05.151791   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:05.151840   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:05.191014   78008 cri.go:89] found id: ""
	I0917 18:32:05.191044   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.191056   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:05.191064   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:05.191126   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:05.226738   78008 cri.go:89] found id: ""
	I0917 18:32:05.226774   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.226787   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:05.226794   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:05.226856   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:05.263072   78008 cri.go:89] found id: ""
	I0917 18:32:05.263102   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.263115   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:05.263124   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:05.263184   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:05.302622   78008 cri.go:89] found id: ""
	I0917 18:32:05.302651   78008 logs.go:276] 0 containers: []
	W0917 18:32:05.302666   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:05.302677   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:05.302691   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:05.358101   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:05.358150   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:05.373289   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:05.373326   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:05.451451   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:05.451484   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:05.451496   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:05.532529   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:05.532570   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:01.165911   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:03.665523   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:04.126090   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:06.625207   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.079204   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:08.093914   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:32:08.093996   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:32:08.131132   78008 cri.go:89] found id: ""
	I0917 18:32:08.131164   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.131173   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:32:08.131178   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:32:08.131230   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:32:08.168017   78008 cri.go:89] found id: ""
	I0917 18:32:08.168044   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.168055   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:32:08.168062   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:32:08.168124   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:32:08.210190   78008 cri.go:89] found id: ""
	I0917 18:32:08.210212   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.210221   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:32:08.210226   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:32:08.210279   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:32:08.250264   78008 cri.go:89] found id: ""
	I0917 18:32:08.250291   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.250299   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:32:08.250304   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:32:08.250352   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:32:08.287732   78008 cri.go:89] found id: ""
	I0917 18:32:08.287760   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.287768   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:32:08.287775   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:32:08.287826   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:32:08.325131   78008 cri.go:89] found id: ""
	I0917 18:32:08.325161   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.325170   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:32:08.325176   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:32:08.325243   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:32:08.365979   78008 cri.go:89] found id: ""
	I0917 18:32:08.366008   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.366019   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:32:08.366027   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:32:08.366088   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:32:08.403430   78008 cri.go:89] found id: ""
	I0917 18:32:08.403472   78008 logs.go:276] 0 containers: []
	W0917 18:32:08.403484   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:32:08.403495   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:32:08.403511   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:32:08.444834   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:32:08.444869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:32:08.500363   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:32:08.500408   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 18:32:08.516624   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:32:08.516653   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:32:08.591279   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:32:08.591304   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:32:08.591317   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:32:06.165279   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:08.168012   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:10.665050   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:11.173345   78008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:11.187689   78008 kubeadm.go:597] duration metric: took 4m1.808927826s to restartPrimaryControlPlane
	W0917 18:32:11.187762   78008 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:11.187786   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:12.794262   78008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.606454478s)
	I0917 18:32:12.794344   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:12.809379   78008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:12.821912   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:12.833176   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:12.833201   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:12.833279   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:12.843175   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:12.843245   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:12.855310   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:12.866777   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:12.866846   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:12.878436   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.889677   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:12.889763   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:12.900141   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:12.909916   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:12.909994   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:12.920578   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:12.993663   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:32:12.993743   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:13.145113   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:13.145321   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:13.145451   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:32:13.346279   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:08.627002   77819 pod_ready.go:103] pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:09.118558   77819 pod_ready.go:82] duration metric: took 4m0.00024297s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:09.118584   77819 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gpdsn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:09.118600   77819 pod_ready.go:39] duration metric: took 4m13.424544466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:09.118628   77819 kubeadm.go:597] duration metric: took 4m21.847475999s to restartPrimaryControlPlane
	W0917 18:32:09.118695   77819 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:09.118723   77819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:13.348308   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:13.348411   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:13.348505   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:13.348622   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:13.348719   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:13.348814   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:13.348895   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:13.348991   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:13.349126   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:13.349595   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:13.349939   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:13.350010   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:13.350096   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:13.677314   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:13.840807   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:13.886801   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:13.937675   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:13.956057   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:13.957185   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:13.957266   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:14.099317   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:14.101339   78008 out.go:235]   - Booting up control plane ...
	I0917 18:32:14.101446   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:14.107518   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:14.107630   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:14.107964   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:14.118995   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:32:13.164003   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:15.165309   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:17.664956   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:20.165073   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.890884   77433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.396095322s)
	I0917 18:32:24.890966   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:24.915367   77433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:24.928191   77433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:24.945924   77433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:24.945943   77433 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:24.945988   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:32:24.961382   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:24.961454   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:24.977324   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:32:24.989771   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:24.989861   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:25.001342   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.035933   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:25.036004   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:25.047185   77433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:32:25.058299   77433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:25.058358   77433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:25.070264   77433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:25.124517   77433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:25.124634   77433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:25.257042   77433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:25.257211   77433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:25.257378   77433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:25.267568   77433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:22.663592   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:24.665849   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:25.269902   77433 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:25.270012   77433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:25.270115   77433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:25.270221   77433 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:25.270288   77433 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:25.270379   77433 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:25.270462   77433 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:25.270563   77433 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:25.270664   77433 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:25.270747   77433 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:25.270810   77433 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:25.270844   77433 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:25.270892   77433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:25.425276   77433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:25.498604   77433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:25.848094   77433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:26.011742   77433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:26.097462   77433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:26.097929   77433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:26.100735   77433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:26.102662   77433 out.go:235]   - Booting up control plane ...
	I0917 18:32:26.102777   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:26.102880   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:26.102954   77433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:26.123221   77433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:26.130932   77433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:26.131021   77433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:26.291311   77433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:26.291462   77433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:27.164870   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:29.165716   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:27.298734   77433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00350356s
	I0917 18:32:27.298851   77433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:32.298994   77433 kubeadm.go:310] [api-check] The API server is healthy after 5.002867585s
	I0917 18:32:32.319430   77433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:32.345527   77433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:32.381518   77433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:32.381817   77433 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-328741 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:32.398185   77433 kubeadm.go:310] [bootstrap-token] Using token: jgy27g.uvhet1w3psx1hofx
	I0917 18:32:32.399853   77433 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:32.400009   77433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:32.407740   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:32.421320   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:32.427046   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:32.434506   77433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:32.438950   77433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:32.705233   77433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:33.140761   77433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:33.720560   77433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:33.720589   77433 kubeadm.go:310] 
	I0917 18:32:33.720679   77433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:33.720690   77433 kubeadm.go:310] 
	I0917 18:32:33.720803   77433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:33.720823   77433 kubeadm.go:310] 
	I0917 18:32:33.720869   77433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:33.720932   77433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:33.721010   77433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:33.721021   77433 kubeadm.go:310] 
	I0917 18:32:33.721094   77433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:33.721103   77433 kubeadm.go:310] 
	I0917 18:32:33.721168   77433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:33.721176   77433 kubeadm.go:310] 
	I0917 18:32:33.721291   77433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:33.721406   77433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:33.721515   77433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:33.721527   77433 kubeadm.go:310] 
	I0917 18:32:33.721653   77433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:33.721780   77433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:33.721797   77433 kubeadm.go:310] 
	I0917 18:32:33.721923   77433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722093   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:33.722131   77433 kubeadm.go:310] 	--control-plane 
	I0917 18:32:33.722140   77433 kubeadm.go:310] 
	I0917 18:32:33.722267   77433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:33.722278   77433 kubeadm.go:310] 
	I0917 18:32:33.722389   77433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgy27g.uvhet1w3psx1hofx \
	I0917 18:32:33.722565   77433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:33.723290   77433 kubeadm.go:310] W0917 18:32:25.090856    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723705   77433 kubeadm.go:310] W0917 18:32:25.092716    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:33.723861   77433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:33.723883   77433 cni.go:84] Creating CNI manager for ""
	I0917 18:32:33.723896   77433 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:33.725956   77433 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:31.665048   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:34.166586   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:33.727153   77433 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:33.739127   77433 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:33.759704   77433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:33.759766   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:33.759799   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-328741 minikube.k8s.io/updated_at=2024_09_17T18_32_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=no-preload-328741 minikube.k8s.io/primary=true
	I0917 18:32:33.977462   77433 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:33.977485   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.477572   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:34.977644   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.477829   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:35.977732   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.477549   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:36.978147   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.477629   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:37.977554   77433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:38.125930   77433 kubeadm.go:1113] duration metric: took 4.366225265s to wait for elevateKubeSystemPrivileges
	I0917 18:32:38.125973   77433 kubeadm.go:394] duration metric: took 4m58.899335742s to StartCluster
	I0917 18:32:38.125999   77433 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.126117   77433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:38.128667   77433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:38.129071   77433 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:38.129134   77433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:38.129258   77433 addons.go:69] Setting storage-provisioner=true in profile "no-preload-328741"
	I0917 18:32:38.129284   77433 addons.go:234] Setting addon storage-provisioner=true in "no-preload-328741"
	W0917 18:32:38.129295   77433 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:38.129331   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129364   77433 config.go:182] Loaded profile config "no-preload-328741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:38.129374   77433 addons.go:69] Setting default-storageclass=true in profile "no-preload-328741"
	I0917 18:32:38.129397   77433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328741"
	I0917 18:32:38.129397   77433 addons.go:69] Setting metrics-server=true in profile "no-preload-328741"
	I0917 18:32:38.129440   77433 addons.go:234] Setting addon metrics-server=true in "no-preload-328741"
	W0917 18:32:38.129451   77433 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:38.129491   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.129831   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129832   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129875   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129965   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.129980   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.129992   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.130833   77433 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:38.132232   77433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:38.151440   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0917 18:32:38.151521   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0917 18:32:38.151524   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0917 18:32:38.152003   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152216   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.152574   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152591   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.152728   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.152743   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.153076   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153077   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.153304   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.153689   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.153731   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.156960   77433 addons.go:234] Setting addon default-storageclass=true in "no-preload-328741"
	W0917 18:32:38.156980   77433 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:38.157007   77433 host.go:66] Checking if "no-preload-328741" exists ...
	I0917 18:32:38.157358   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.157404   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.157700   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.158314   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.158332   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.158738   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.159296   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.159332   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.179409   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I0917 18:32:38.179948   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.180402   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.180433   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.180922   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.181082   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.183522   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0917 18:32:38.183904   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.184373   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.184389   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.184750   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.184911   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.187520   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.187560   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.188071   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.188750   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.188768   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.189208   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.189573   77433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:38.189597   77433 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:35.488250   77819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.369501216s)
	I0917 18:32:35.488328   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:35.507245   77819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:32:35.522739   77819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:32:35.537981   77819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:32:35.538002   77819 kubeadm.go:157] found existing configuration files:
	
	I0917 18:32:35.538060   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0917 18:32:35.552269   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:32:35.552346   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:32:35.566005   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0917 18:32:35.577402   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:32:35.577482   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:32:35.588633   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.600487   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:32:35.600559   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:32:35.612243   77819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0917 18:32:35.623548   77819 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:32:35.623630   77819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:32:35.635837   77819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:32:35.690169   77819 kubeadm.go:310] W0917 18:32:35.657767    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.690728   77819 kubeadm.go:310] W0917 18:32:35.658500    2589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:32:35.819945   77819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:32:38.189867   77433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:38.189904   77433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:38.191297   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:38.191318   77433 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:38.191340   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.191421   77433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.191441   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:38.191467   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.195617   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196040   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196070   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196098   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196292   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196554   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.196633   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.196645   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.196829   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.196868   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.196999   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.197320   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.197549   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.197724   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.211021   77433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0917 18:32:38.211713   77433 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:38.212330   77433 main.go:141] libmachine: Using API Version  1
	I0917 18:32:38.212349   77433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:38.212900   77433 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:38.213161   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetState
	I0917 18:32:38.214937   77433 main.go:141] libmachine: (no-preload-328741) Calling .DriverName
	I0917 18:32:38.215252   77433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.215267   77433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:38.215284   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHHostname
	I0917 18:32:38.218542   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219120   77433 main.go:141] libmachine: (no-preload-328741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:bd:6d", ip: ""} in network mk-no-preload-328741: {Iface:virbr4 ExpiryTime:2024-09-17 19:27:14 +0000 UTC Type:0 Mac:52:54:00:de:bd:6d Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:no-preload-328741 Clientid:01:52:54:00:de:bd:6d}
	I0917 18:32:38.219141   77433 main.go:141] libmachine: (no-preload-328741) DBG | domain no-preload-328741 has defined IP address 192.168.72.182 and MAC address 52:54:00:de:bd:6d in network mk-no-preload-328741
	I0917 18:32:38.219398   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHPort
	I0917 18:32:38.219649   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHKeyPath
	I0917 18:32:38.219795   77433 main.go:141] libmachine: (no-preload-328741) Calling .GetSSHUsername
	I0917 18:32:38.219983   77433 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/no-preload-328741/id_rsa Username:docker}
	I0917 18:32:38.350631   77433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:38.420361   77433 node_ready.go:35] waiting up to 6m0s for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445121   77433 node_ready.go:49] node "no-preload-328741" has status "Ready":"True"
	I0917 18:32:38.445147   77433 node_ready.go:38] duration metric: took 24.749282ms for node "no-preload-328741" to be "Ready" ...
	I0917 18:32:38.445159   77433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:38.468481   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:38.473593   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:38.529563   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:38.529592   77433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:38.569714   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:38.611817   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:38.611845   77433 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:38.681763   77433 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.681791   77433 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:38.754936   77433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:38.771115   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771142   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.771564   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.771583   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.771603   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.771612   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.773362   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.773370   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.773381   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:38.782423   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:38.782468   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:38.782821   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:38.782877   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:38.782889   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826176   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256415127s)
	I0917 18:32:39.826230   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826241   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826591   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.826618   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:39.826619   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.826627   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:39.826638   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:39.826905   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:39.828259   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:39.828279   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.095498   77433 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340502717s)
	I0917 18:32:40.095562   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.095578   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096020   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096018   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.096047   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.096056   77433 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:40.096064   77433 main.go:141] libmachine: (no-preload-328741) Calling .Close
	I0917 18:32:40.096372   77433 main.go:141] libmachine: (no-preload-328741) DBG | Closing plugin on server side
	I0917 18:32:40.096391   77433 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:40.097299   77433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:40.097317   77433 addons.go:475] Verifying addon metrics-server=true in "no-preload-328741"
	I0917 18:32:40.099125   77433 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0917 18:32:36.663739   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:38.666621   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:40.100317   77433 addons.go:510] duration metric: took 1.971194765s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0917 18:32:40.481646   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.319473   77819 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:32:44.319570   77819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:32:44.319698   77819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:32:44.319793   77819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:32:44.319888   77819 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:32:44.319977   77819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:32:44.322424   77819 out.go:235]   - Generating certificates and keys ...
	I0917 18:32:44.322509   77819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:32:44.322570   77819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:32:44.322640   77819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:32:44.322732   77819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:32:44.322806   77819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:32:44.322854   77819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:32:44.322911   77819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:32:44.322993   77819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:32:44.323071   77819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:32:44.323150   77819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:32:44.323197   77819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:32:44.323246   77819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:32:44.323289   77819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:32:44.323337   77819 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:32:44.323390   77819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:32:44.323456   77819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:32:44.323521   77819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:32:44.323613   77819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:32:44.323704   77819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:32:44.324959   77819 out.go:235]   - Booting up control plane ...
	I0917 18:32:44.325043   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:32:44.325120   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:32:44.325187   77819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:32:44.325303   77819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:32:44.325371   77819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:32:44.325404   77819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:32:44.325533   77819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:32:44.325635   77819 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:32:44.325710   77819 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001958745s
	I0917 18:32:44.325774   77819 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:32:44.325830   77819 kubeadm.go:310] [api-check] The API server is healthy after 5.002835169s
	I0917 18:32:44.325919   77819 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:32:44.326028   77819 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:32:44.326086   77819 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:32:44.326239   77819 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-438836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:32:44.326311   77819 kubeadm.go:310] [bootstrap-token] Using token: xgap2f.3rz1qjyfivkbqx8u
	I0917 18:32:44.327661   77819 out.go:235]   - Configuring RBAC rules ...
	I0917 18:32:44.327770   77819 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:32:44.327838   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:32:44.328050   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:32:44.328166   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:32:44.328266   77819 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:32:44.328337   77819 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:32:44.328483   77819 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:32:44.328519   77819 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:32:44.328564   77819 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:32:44.328573   77819 kubeadm.go:310] 
	I0917 18:32:44.328628   77819 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:32:44.328634   77819 kubeadm.go:310] 
	I0917 18:32:44.328702   77819 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:32:44.328710   77819 kubeadm.go:310] 
	I0917 18:32:44.328736   77819 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:32:44.328798   77819 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:32:44.328849   77819 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:32:44.328858   77819 kubeadm.go:310] 
	I0917 18:32:44.328940   77819 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:32:44.328949   77819 kubeadm.go:310] 
	I0917 18:32:44.328997   77819 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:32:44.329011   77819 kubeadm.go:310] 
	I0917 18:32:44.329054   77819 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:32:44.329122   77819 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:32:44.329184   77819 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:32:44.329191   77819 kubeadm.go:310] 
	I0917 18:32:44.329281   77819 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:32:44.329359   77819 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:32:44.329372   77819 kubeadm.go:310] 
	I0917 18:32:44.329487   77819 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329599   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:32:44.329619   77819 kubeadm.go:310] 	--control-plane 
	I0917 18:32:44.329625   77819 kubeadm.go:310] 
	I0917 18:32:44.329709   77819 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:32:44.329716   77819 kubeadm.go:310] 
	I0917 18:32:44.329784   77819 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xgap2f.3rz1qjyfivkbqx8u \
	I0917 18:32:44.329896   77819 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:32:44.329910   77819 cni.go:84] Creating CNI manager for ""
	I0917 18:32:44.329916   77819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:32:44.331403   77819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:32:41.165452   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:43.167184   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.664612   77264 pod_ready.go:103] pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:42.976970   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:45.475620   77433 pod_ready.go:103] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:44.332786   77819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:32:44.344553   77819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 18:32:44.365355   77819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:32:44.365417   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:44.365457   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-438836 minikube.k8s.io/updated_at=2024_09_17T18_32_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=default-k8s-diff-port-438836 minikube.k8s.io/primary=true
	I0917 18:32:44.393987   77819 ops.go:34] apiserver oom_adj: -16
	I0917 18:32:44.608512   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.109295   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:45.609455   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.108538   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:46.609062   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.108933   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:47.608565   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.109355   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:48.609390   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.109204   77819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:32:49.305574   77819 kubeadm.go:1113] duration metric: took 4.940218828s to wait for elevateKubeSystemPrivileges
	I0917 18:32:49.305616   77819 kubeadm.go:394] duration metric: took 5m2.086280483s to StartCluster
	I0917 18:32:49.305640   77819 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.305743   77819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:32:49.308226   77819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:32:49.308590   77819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:32:49.308755   77819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:32:49.308838   77819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308861   77819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-438836"
	I0917 18:32:49.308863   77819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308882   77819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-438836"
	I0917 18:32:49.308881   77819 config.go:182] Loaded profile config "default-k8s-diff-port-438836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:32:49.308895   77819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.308946   77819 addons.go:243] addon metrics-server should already be in state true
	I0917 18:32:49.309006   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.308895   77819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-438836"
	W0917 18:32:49.308873   77819 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:32:49.309151   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.309458   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309509   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309544   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309580   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.309585   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.309613   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.310410   77819 out.go:177] * Verifying Kubernetes components...
	I0917 18:32:49.311819   77819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:32:49.326762   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0917 18:32:49.327055   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0917 18:32:49.327287   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327615   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.327869   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.327888   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328171   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.328194   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.328215   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.328403   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.328622   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.329285   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.329330   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.329573   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0917 18:32:49.330145   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.330651   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.330674   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.331084   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.331715   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.331767   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.332232   77819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-438836"
	W0917 18:32:49.332250   77819 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:32:49.332278   77819 host.go:66] Checking if "default-k8s-diff-port-438836" exists ...
	I0917 18:32:49.332550   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.332595   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.346536   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
	I0917 18:32:49.347084   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.347712   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.347737   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.348229   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.348469   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.350631   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0917 18:32:49.351520   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.351581   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.352110   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.352138   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.352297   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0917 18:32:49.352720   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.352736   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353270   77819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:32:49.353310   77819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:32:49.353318   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.353334   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.353707   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.353861   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.354855   77819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:32:49.356031   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.356123   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:32:49.356153   77819 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:32:49.356181   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.358023   77819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:32:47.475181   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:47.475212   77433 pod_ready.go:82] duration metric: took 9.006699747s for pod "coredns-7c65d6cfc9-gddwk" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:47.475230   77433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483276   77433 pod_ready.go:93] pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.483301   77433 pod_ready.go:82] duration metric: took 1.008063055s for pod "coredns-7c65d6cfc9-qv4pq" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.483310   77433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488897   77433 pod_ready.go:93] pod "etcd-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.488922   77433 pod_ready.go:82] duration metric: took 5.605818ms for pod "etcd-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.488931   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493809   77433 pod_ready.go:93] pod "kube-apiserver-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.493840   77433 pod_ready.go:82] duration metric: took 4.899361ms for pod "kube-apiserver-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.493853   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498703   77433 pod_ready.go:93] pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.498730   77433 pod_ready.go:82] duration metric: took 4.869599ms for pod "kube-controller-manager-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.498741   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673260   77433 pod_ready.go:93] pod "kube-proxy-2945m" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:48.673288   77433 pod_ready.go:82] duration metric: took 174.539603ms for pod "kube-proxy-2945m" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:48.673300   77433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073094   77433 pod_ready.go:93] pod "kube-scheduler-no-preload-328741" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:49.073121   77433 pod_ready.go:82] duration metric: took 399.810804ms for pod "kube-scheduler-no-preload-328741" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.073132   77433 pod_ready.go:39] duration metric: took 10.627960333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.073148   77433 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:49.073220   77433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:49.089310   77433 api_server.go:72] duration metric: took 10.960186006s to wait for apiserver process to appear ...
	I0917 18:32:49.089337   77433 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:49.089360   77433 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0917 18:32:49.094838   77433 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0917 18:32:49.095838   77433 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:49.095862   77433 api_server.go:131] duration metric: took 6.516706ms to wait for apiserver health ...
	I0917 18:32:49.095872   77433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:49.278262   77433 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:49.278306   77433 system_pods.go:61] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.278312   77433 system_pods.go:61] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.278315   77433 system_pods.go:61] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.278319   77433 system_pods.go:61] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.278323   77433 system_pods.go:61] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.278326   77433 system_pods.go:61] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.278329   77433 system_pods.go:61] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.278337   77433 system_pods.go:61] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.278341   77433 system_pods.go:61] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.278348   77433 system_pods.go:74] duration metric: took 182.470522ms to wait for pod list to return data ...
	I0917 18:32:49.278355   77433 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:49.474126   77433 default_sa.go:45] found service account: "default"
	I0917 18:32:49.474155   77433 default_sa.go:55] duration metric: took 195.79307ms for default service account to be created ...
	I0917 18:32:49.474166   77433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:49.678032   77433 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:49.678062   77433 system_pods.go:89] "coredns-7c65d6cfc9-gddwk" [57f85dd3-be48-4648-8d70-7a06aeaecdc2] Running
	I0917 18:32:49.678068   77433 system_pods.go:89] "coredns-7c65d6cfc9-qv4pq" [31f7e4b5-3870-41a1-96f8-8e13511fe684] Running
	I0917 18:32:49.678072   77433 system_pods.go:89] "etcd-no-preload-328741" [42b632f3-5576-4779-8895-3adcecfb278c] Running
	I0917 18:32:49.678076   77433 system_pods.go:89] "kube-apiserver-no-preload-328741" [ff2d44e3-dad5-4c24-a24d-2df425466747] Running
	I0917 18:32:49.678080   77433 system_pods.go:89] "kube-controller-manager-no-preload-328741" [eec3bebd-16ed-428e-8411-bca31800b36c] Running
	I0917 18:32:49.678083   77433 system_pods.go:89] "kube-proxy-2945m" [8a7b75b4-28c5-476a-b002-05313976c138] Running
	I0917 18:32:49.678086   77433 system_pods.go:89] "kube-scheduler-no-preload-328741" [06c97bf5-3ad3-45c5-8eaa-aa3cdbf51f12] Running
	I0917 18:32:49.678095   77433 system_pods.go:89] "metrics-server-6867b74b74-cvttg" [1b2d6700-2e3c-4a35-9794-0ec095eed0d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:49.678101   77433 system_pods.go:89] "storage-provisioner" [03a8e7f5-ea70-4653-837b-5ad54de48136] Running
	I0917 18:32:49.678111   77433 system_pods.go:126] duration metric: took 203.938016ms to wait for k8s-apps to be running ...
	I0917 18:32:49.678120   77433 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:49.678169   77433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:49.698139   77433 system_svc.go:56] duration metric: took 20.008261ms WaitForService to wait for kubelet
	I0917 18:32:49.698169   77433 kubeadm.go:582] duration metric: took 11.569050863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:49.698188   77433 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:49.873214   77433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:49.873286   77433 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:49.873304   77433 node_conditions.go:105] duration metric: took 175.108582ms to run NodePressure ...
	I0917 18:32:49.873319   77433 start.go:241] waiting for startup goroutines ...
	I0917 18:32:49.873329   77433 start.go:246] waiting for cluster config update ...
	I0917 18:32:49.873342   77433 start.go:255] writing updated cluster config ...
	I0917 18:32:49.873719   77433 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:49.928157   77433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:49.930136   77433 out.go:177] * Done! kubectl is now configured to use "no-preload-328741" cluster and "default" namespace by default
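At this point the no-preload-328741 profile is up and kubectl is pointed at it. A quick sanity check from the host would look like the following (illustrative commands only, not part of the recorded test run):

    kubectl config current-context        # should print no-preload-328741
    kubectl get nodes -o wide             # the single node should report Ready
    kubectl -n kube-system get pods       # control-plane pods plus storage-provisioner and the still-pending metrics-server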
	I0917 18:32:47.158355   77264 pod_ready.go:82] duration metric: took 4m0.000722561s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" ...
	E0917 18:32:47.158398   77264 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2ttm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0917 18:32:47.158416   77264 pod_ready.go:39] duration metric: took 4m11.016184959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:47.158443   77264 kubeadm.go:597] duration metric: took 4m19.974943276s to restartPrimaryControlPlane
	W0917 18:32:47.158508   77264 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 18:32:47.158539   77264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:32:49.359450   77819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.359475   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:32:49.359496   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.360356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361125   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.360783   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.361427   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.361439   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.361615   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.361803   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.363091   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363388   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.363420   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.363601   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.363803   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.363956   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.364108   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.374395   77819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0917 18:32:49.374937   77819 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:32:49.375474   77819 main.go:141] libmachine: Using API Version  1
	I0917 18:32:49.375506   77819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:32:49.375858   77819 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:32:49.376073   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetState
	I0917 18:32:49.377667   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .DriverName
	I0917 18:32:49.377884   77819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.377899   77819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:32:49.377912   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHHostname
	I0917 18:32:49.381821   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.381992   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:fb:fd", ip: ""} in network mk-default-k8s-diff-port-438836: {Iface:virbr1 ExpiryTime:2024-09-17 19:27:32 +0000 UTC Type:0 Mac:52:54:00:78:fb:fd Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:default-k8s-diff-port-438836 Clientid:01:52:54:00:78:fb:fd}
	I0917 18:32:49.382009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | domain default-k8s-diff-port-438836 has defined IP address 192.168.39.58 and MAC address 52:54:00:78:fb:fd in network mk-default-k8s-diff-port-438836
	I0917 18:32:49.382202   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHPort
	I0917 18:32:49.382366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHKeyPath
	I0917 18:32:49.382534   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .GetSSHUsername
	I0917 18:32:49.382855   77819 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/default-k8s-diff-port-438836/id_rsa Username:docker}
	I0917 18:32:49.601072   77819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:32:49.657872   77819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669721   77819 node_ready.go:49] node "default-k8s-diff-port-438836" has status "Ready":"True"
	I0917 18:32:49.669750   77819 node_ready.go:38] duration metric: took 11.838649ms for node "default-k8s-diff-port-438836" to be "Ready" ...
	I0917 18:32:49.669761   77819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:49.692344   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:49.774555   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:32:49.821754   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:32:49.826676   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:32:49.826694   77819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:32:49.941685   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:32:49.941712   77819 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:32:50.121418   77819 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.121444   77819 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:32:50.233586   77819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:32:50.948870   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174278798s)
	I0917 18:32:50.948915   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948926   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.948941   77819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12715113s)
	I0917 18:32:50.948983   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.948997   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949213   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949240   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949249   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949257   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949335   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949346   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949349   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949367   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.949375   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.949484   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949517   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949530   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.949689   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:50.949700   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.949720   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:50.971989   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:50.972009   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:50.972307   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:50.972326   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167019   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167041   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167324   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167350   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167358   77819 main.go:141] libmachine: Making call to close driver server
	I0917 18:32:51.167356   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) DBG | Closing plugin on server side
	I0917 18:32:51.167366   77819 main.go:141] libmachine: (default-k8s-diff-port-438836) Calling .Close
	I0917 18:32:51.167581   77819 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:32:51.167593   77819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:32:51.167605   77819 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-438836"
	I0917 18:32:51.170208   77819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:32:51.171345   77819 addons.go:510] duration metric: took 1.86260047s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
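The metrics-server pod is still Pending at this point because its container has to be pulled and pass readiness probes before the aggregated metrics API serves data. A hedged way to watch the addon converge, assuming the stock deployment name metrics-server in kube-system:

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=5m
    kubectl top nodes    # only succeeds once the metrics API is actually serving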
	I0917 18:32:51.701056   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:53.199802   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:53.199832   77819 pod_ready.go:82] duration metric: took 3.507449551s for pod "coredns-7c65d6cfc9-8nrnc" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:53.199846   77819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:54.116602   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:32:54.116783   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:54.117004   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:32:55.207337   77819 pod_ready.go:103] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"False"
	I0917 18:32:56.207361   77819 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.207390   77819 pod_ready.go:82] duration metric: took 3.007535449s for pod "coredns-7c65d6cfc9-x4l48" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.207403   77819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212003   77819 pod_ready.go:93] pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.212025   77819 pod_ready.go:82] duration metric: took 4.613897ms for pod "etcd-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.212034   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216625   77819 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.216645   77819 pod_ready.go:82] duration metric: took 4.604444ms for pod "kube-apiserver-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.216654   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724223   77819 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.724257   77819 pod_ready.go:82] duration metric: took 507.594976ms for pod "kube-controller-manager-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.724277   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729284   77819 pod_ready.go:93] pod "kube-proxy-xwqtr" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:56.729312   77819 pod_ready.go:82] duration metric: took 5.025818ms for pod "kube-proxy-xwqtr" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:56.729324   77819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004900   77819 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace has status "Ready":"True"
	I0917 18:32:57.004926   77819 pod_ready.go:82] duration metric: took 275.593421ms for pod "kube-scheduler-default-k8s-diff-port-438836" in "kube-system" namespace to be "Ready" ...
	I0917 18:32:57.004935   77819 pod_ready.go:39] duration metric: took 7.335162837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:32:57.004951   77819 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:32:57.004999   77819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:32:57.020042   77819 api_server.go:72] duration metric: took 7.711410338s to wait for apiserver process to appear ...
	I0917 18:32:57.020070   77819 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:32:57.020095   77819 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8444/healthz ...
	I0917 18:32:57.024504   77819 api_server.go:279] https://192.168.39.58:8444/healthz returned 200:
	ok
	I0917 18:32:57.025722   77819 api_server.go:141] control plane version: v1.31.1
	I0917 18:32:57.025749   77819 api_server.go:131] duration metric: took 5.670742ms to wait for apiserver health ...
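The healthz probe above talks to the API server directly on this profile's non-default port (8444). The same check can be reproduced by hand; on a default-configured cluster /healthz is readable by unauthenticated clients through the system:public-info-viewer binding, so no client certificate is required and -k merely skips verification of the serving certificate:

    curl -sk https://192.168.39.58:8444/healthz    # expected body: ok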
	I0917 18:32:57.025759   77819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:32:57.206512   77819 system_pods.go:59] 9 kube-system pods found
	I0917 18:32:57.206548   77819 system_pods.go:61] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.206555   77819 system_pods.go:61] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.206561   77819 system_pods.go:61] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.206567   77819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.206573   77819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.206577   77819 system_pods.go:61] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.206582   77819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.206593   77819 system_pods.go:61] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.206599   77819 system_pods.go:61] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.206609   77819 system_pods.go:74] duration metric: took 180.842325ms to wait for pod list to return data ...
	I0917 18:32:57.206619   77819 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:32:57.404368   77819 default_sa.go:45] found service account: "default"
	I0917 18:32:57.404395   77819 default_sa.go:55] duration metric: took 197.770326ms for default service account to be created ...
	I0917 18:32:57.404404   77819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:32:57.607472   77819 system_pods.go:86] 9 kube-system pods found
	I0917 18:32:57.607504   77819 system_pods.go:89] "coredns-7c65d6cfc9-8nrnc" [96eeb328-605e-468b-a022-dbb7b5b44501] Running
	I0917 18:32:57.607513   77819 system_pods.go:89] "coredns-7c65d6cfc9-x4l48" [12a20eeb-edd1-4f5b-bf64-ba3d2c8ae05b] Running
	I0917 18:32:57.607519   77819 system_pods.go:89] "etcd-default-k8s-diff-port-438836" [091ba47e-1133-4557-b3d7-eb39578840ab] Running
	I0917 18:32:57.607523   77819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-438836" [cbb0e5fe-7583-4f3e-a0cd-dc32b00bb161] Running
	I0917 18:32:57.607529   77819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-438836" [fe0a5927-2747-4e04-b9fc-c3071cb01ceb] Running
	I0917 18:32:57.607536   77819 system_pods.go:89] "kube-proxy-xwqtr" [5875ff28-7e41-4887-94da-d7632d8141e8] Running
	I0917 18:32:57.607542   77819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-438836" [b25c5a55-a0e5-432a-a490-69b75d3a48d8] Running
	I0917 18:32:57.607552   77819 system_pods.go:89] "metrics-server-6867b74b74-qnfv2" [75be5ed8-b62d-42c8-8ea9-5809187be05a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:32:57.607558   77819 system_pods.go:89] "storage-provisioner" [a1ae1ecf-9311-4d61-a56d-9147876d4a9d] Running
	I0917 18:32:57.607573   77819 system_pods.go:126] duration metric: took 203.161716ms to wait for k8s-apps to be running ...
	I0917 18:32:57.607584   77819 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:32:57.607642   77819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:32:57.623570   77819 system_svc.go:56] duration metric: took 15.976138ms WaitForService to wait for kubelet
	I0917 18:32:57.623607   77819 kubeadm.go:582] duration metric: took 8.314980472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:32:57.623629   77819 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:32:57.804485   77819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:32:57.804510   77819 node_conditions.go:123] node cpu capacity is 2
	I0917 18:32:57.804520   77819 node_conditions.go:105] duration metric: took 180.885929ms to run NodePressure ...
	I0917 18:32:57.804532   77819 start.go:241] waiting for startup goroutines ...
	I0917 18:32:57.804539   77819 start.go:246] waiting for cluster config update ...
	I0917 18:32:57.804549   77819 start.go:255] writing updated cluster config ...
	I0917 18:32:57.804868   77819 ssh_runner.go:195] Run: rm -f paused
	I0917 18:32:57.854248   77819 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:32:57.856295   77819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-438836" cluster and "default" namespace by default
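Several profiles are being driven in parallel in this run (no-preload-328741 and default-k8s-diff-port-438836 are now done, embed-certs-081863 is mid-reset, and process 78008 is still retrying its kubelet-check). Outside the harness, the profiles and the kubectl contexts they register can be listed and switched with (illustrative):

    minikube profile list
    kubectl config get-contexts
    kubectl config use-context default-k8s-diff-port-438836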
	I0917 18:32:59.116802   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:32:59.117073   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:09.116772   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:09.117022   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:13.480418   77264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.32185403s)
	I0917 18:33:13.480497   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:13.497676   77264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 18:33:13.509036   77264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:33:13.519901   77264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:33:13.519927   77264 kubeadm.go:157] found existing configuration files:
	
	I0917 18:33:13.519985   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:33:13.530704   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:33:13.530784   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:33:13.541442   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:33:13.553771   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:33:13.553844   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:33:13.566357   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.576787   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:33:13.576871   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:33:13.587253   77264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:33:13.597253   77264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:33:13.597331   77264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
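The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files are simply gone after the kubeadm reset, so every grep exits with status 2 and every rm is a no-op). Condensed into an equivalent shell sketch (illustrative, not the literal minikube implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done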
	I0917 18:33:13.607686   77264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:33:13.657294   77264 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 18:33:13.657416   77264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:33:13.784063   77264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:33:13.784228   77264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:33:13.784388   77264 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 18:33:13.797531   77264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:33:13.799464   77264 out.go:235]   - Generating certificates and keys ...
	I0917 18:33:13.799555   77264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:33:13.799626   77264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:33:13.799735   77264 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:33:13.799849   77264 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:33:13.799964   77264 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:33:13.800059   77264 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:33:13.800305   77264 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:33:13.800620   77264 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:33:13.800843   77264 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:33:13.801056   77264 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:33:13.801220   77264 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:33:13.801361   77264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:33:13.949574   77264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:33:14.002216   77264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 18:33:14.113507   77264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:33:14.328861   77264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:33:14.452448   77264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:33:14.452956   77264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:33:14.456029   77264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:33:14.458085   77264 out.go:235]   - Booting up control plane ...
	I0917 18:33:14.458197   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:33:14.458298   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:33:14.458418   77264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:33:14.480556   77264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:33:14.490011   77264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:33:14.490108   77264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:33:14.641550   77264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 18:33:14.641680   77264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 18:33:16.163986   77264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.521637216s
	I0917 18:33:16.164081   77264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 18:33:21.167283   77264 kubeadm.go:310] [api-check] The API server is healthy after 5.003926265s
	I0917 18:33:21.187439   77264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 18:33:21.214590   77264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 18:33:21.256056   77264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 18:33:21.256319   77264 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-081863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 18:33:21.274920   77264 kubeadm.go:310] [bootstrap-token] Using token: tkf10q.2xx4v0n14dywt5kc
	I0917 18:33:21.276557   77264 out.go:235]   - Configuring RBAC rules ...
	I0917 18:33:21.276707   77264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 18:33:21.286544   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 18:33:21.299514   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 18:33:21.304466   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 18:33:21.309218   77264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 18:33:21.315113   77264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 18:33:21.575303   77264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 18:33:22.022249   77264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 18:33:22.576184   77264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 18:33:22.576211   77264 kubeadm.go:310] 
	I0917 18:33:22.576279   77264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 18:33:22.576291   77264 kubeadm.go:310] 
	I0917 18:33:22.576360   77264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 18:33:22.576367   77264 kubeadm.go:310] 
	I0917 18:33:22.576388   77264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 18:33:22.576480   77264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 18:33:22.576565   77264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 18:33:22.576576   77264 kubeadm.go:310] 
	I0917 18:33:22.576640   77264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 18:33:22.576649   77264 kubeadm.go:310] 
	I0917 18:33:22.576725   77264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 18:33:22.576742   77264 kubeadm.go:310] 
	I0917 18:33:22.576802   77264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 18:33:22.576884   77264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 18:33:22.576987   77264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 18:33:22.577008   77264 kubeadm.go:310] 
	I0917 18:33:22.577111   77264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 18:33:22.577221   77264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 18:33:22.577246   77264 kubeadm.go:310] 
	I0917 18:33:22.577361   77264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577505   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 \
	I0917 18:33:22.577543   77264 kubeadm.go:310] 	--control-plane 
	I0917 18:33:22.577552   77264 kubeadm.go:310] 
	I0917 18:33:22.577660   77264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 18:33:22.577671   77264 kubeadm.go:310] 
	I0917 18:33:22.577778   77264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tkf10q.2xx4v0n14dywt5kc \
	I0917 18:33:22.577908   77264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0381772327c4b2bd99fe5752fbb1717549bd6502f62dea97db4e562fc85cbd67 
	I0917 18:33:22.579092   77264 kubeadm.go:310] W0917 18:33:13.630065    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579481   77264 kubeadm.go:310] W0917 18:33:13.630936    2521 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 18:33:22.579593   77264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
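The second kubeadm init succeeds after the reset, and the join commands above carry a freshly minted bootstrap token. For reference, on the control-plane node the token and a complete worker join command can be re-derived at any time with standard kubeadm subcommands (not run by the test):

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command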
	I0917 18:33:22.579621   77264 cni.go:84] Creating CNI manager for ""
	I0917 18:33:22.579630   77264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 18:33:22.581566   77264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 18:33:22.582849   77264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 18:33:22.595489   77264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
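With the kvm2 driver and the crio runtime, minikube recommends the plain bridge CNI and writes a single conflist (496 bytes here) into /etc/cni/net.d. The installed file can be inspected from the host instead of guessing at its contents (illustrative; -p selects the profile):

    minikube -p embed-certs-081863 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube -p embed-certs-081863 ssh -- ls -la /etc/cni/net.d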
	I0917 18:33:22.627349   77264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 18:33:22.627411   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:22.627448   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-081863 minikube.k8s.io/updated_at=2024_09_17T18_33_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=embed-certs-081863 minikube.k8s.io/primary=true
	I0917 18:33:22.862361   77264 ops.go:34] apiserver oom_adj: -16
	I0917 18:33:22.862491   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.362641   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:23.863054   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.363374   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:24.862744   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.362644   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.863065   77264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 18:33:25.974152   77264 kubeadm.go:1113] duration metric: took 3.346801442s to wait for elevateKubeSystemPrivileges
	I0917 18:33:25.974185   77264 kubeadm.go:394] duration metric: took 4m58.844504582s to StartCluster
	I0917 18:33:25.974203   77264 settings.go:142] acquiring lock: {Name:mk34898861b3ed534da4bd8404feab95fe690001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.974289   77264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:33:25.976039   77264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-11085/kubeconfig: {Name:mkc0149e204c347d5458377942fa73da18a9229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 18:33:25.976296   77264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 18:33:25.976407   77264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 18:33:25.976517   77264 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-081863"
	I0917 18:33:25.976528   77264 addons.go:69] Setting default-storageclass=true in profile "embed-certs-081863"
	I0917 18:33:25.976535   77264 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-081863"
	W0917 18:33:25.976543   77264 addons.go:243] addon storage-provisioner should already be in state true
	I0917 18:33:25.976547   77264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-081863"
	I0917 18:33:25.976573   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976624   77264 config.go:182] Loaded profile config "embed-certs-081863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:33:25.976662   77264 addons.go:69] Setting metrics-server=true in profile "embed-certs-081863"
	I0917 18:33:25.976672   77264 addons.go:234] Setting addon metrics-server=true in "embed-certs-081863"
	W0917 18:33:25.976679   77264 addons.go:243] addon metrics-server should already be in state true
	I0917 18:33:25.976698   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.976964   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976984   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.976997   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977013   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.977030   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.977050   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.978439   77264 out.go:177] * Verifying Kubernetes components...
	I0917 18:33:25.980250   77264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 18:33:25.993034   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0917 18:33:25.993038   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0917 18:33:25.993551   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0917 18:33:25.993589   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993625   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.993887   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:25.994098   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994122   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994193   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994211   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994442   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:25.994466   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994523   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.994762   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:25.994791   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:25.995118   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995168   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.995251   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.995284   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:25.998228   77264 addons.go:234] Setting addon default-storageclass=true in "embed-certs-081863"
	W0917 18:33:25.998260   77264 addons.go:243] addon default-storageclass should already be in state true
	I0917 18:33:25.998301   77264 host.go:66] Checking if "embed-certs-081863" exists ...
	I0917 18:33:25.998642   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:25.998688   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.011862   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0917 18:33:26.012556   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.013142   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.013168   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.013578   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.014129   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0917 18:33:26.014246   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0917 18:33:26.014331   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.014633   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.014703   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.015086   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015108   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015379   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.015396   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.015451   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.015895   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.016078   77264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 18:33:26.016113   77264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 18:33:26.016486   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.016525   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.018385   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.019139   77264 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 18:33:26.020119   77264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 18:33:26.020991   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 18:33:26.021013   77264 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 18:33:26.021035   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.021810   77264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.021825   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 18:33:26.021839   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.025804   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026074   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026097   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.025803   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.026468   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.026649   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.026937   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.026982   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.026991   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.027025   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.027114   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.027232   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.027417   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.027580   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.035905   77264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0917 18:33:26.036621   77264 main.go:141] libmachine: () Calling .GetVersion
	I0917 18:33:26.037566   77264 main.go:141] libmachine: Using API Version  1
	I0917 18:33:26.037597   77264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 18:33:26.038013   77264 main.go:141] libmachine: () Calling .GetMachineName
	I0917 18:33:26.038317   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetState
	I0917 18:33:26.040464   77264 main.go:141] libmachine: (embed-certs-081863) Calling .DriverName
	I0917 18:33:26.040887   77264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.040908   77264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 18:33:26.040922   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHHostname
	I0917 18:33:26.043857   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044291   77264 main.go:141] libmachine: (embed-certs-081863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:17:3d", ip: ""} in network mk-embed-certs-081863: {Iface:virbr2 ExpiryTime:2024-09-17 19:28:13 +0000 UTC Type:0 Mac:52:54:00:3f:17:3d Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:embed-certs-081863 Clientid:01:52:54:00:3f:17:3d}
	I0917 18:33:26.044325   77264 main.go:141] libmachine: (embed-certs-081863) DBG | domain embed-certs-081863 has defined IP address 192.168.50.61 and MAC address 52:54:00:3f:17:3d in network mk-embed-certs-081863
	I0917 18:33:26.044488   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHPort
	I0917 18:33:26.044682   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHKeyPath
	I0917 18:33:26.044838   77264 main.go:141] libmachine: (embed-certs-081863) Calling .GetSSHUsername
	I0917 18:33:26.045034   77264 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/embed-certs-081863/id_rsa Username:docker}
	I0917 18:33:26.155880   77264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 18:33:26.182293   77264 node_ready.go:35] waiting up to 6m0s for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191336   77264 node_ready.go:49] node "embed-certs-081863" has status "Ready":"True"
	I0917 18:33:26.191358   77264 node_ready.go:38] duration metric: took 9.032061ms for node "embed-certs-081863" to be "Ready" ...
	I0917 18:33:26.191366   77264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:26.196333   77264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:26.260819   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 18:33:26.270678   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 18:33:26.270701   77264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 18:33:26.306169   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 18:33:26.310271   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 18:33:26.310300   77264 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 18:33:26.367576   77264 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:26.367603   77264 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 18:33:26.424838   77264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 18:33:27.088293   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088326   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088329   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088352   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088726   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088759   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088782   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.088794   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.088831   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.088845   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088853   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.088798   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.088923   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.089075   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089088   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089200   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.089210   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.089242   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.162204   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.162227   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.162597   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.162619   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.423795   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.423824   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424110   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424127   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424136   77264 main.go:141] libmachine: Making call to close driver server
	I0917 18:33:27.424145   77264 main.go:141] libmachine: (embed-certs-081863) Calling .Close
	I0917 18:33:27.424369   77264 main.go:141] libmachine: Successfully made call to close driver server
	I0917 18:33:27.424385   77264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 18:33:27.424395   77264 addons.go:475] Verifying addon metrics-server=true in "embed-certs-081863"
	I0917 18:33:27.424390   77264 main.go:141] libmachine: (embed-certs-081863) DBG | Closing plugin on server side
	I0917 18:33:27.426548   77264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0917 18:33:29.116398   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:33:29.116681   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:33:27.427684   77264 addons.go:510] duration metric: took 1.451280405s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0917 18:33:28.311561   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:30.703554   77264 pod_ready.go:103] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"False"
	I0917 18:33:31.203018   77264 pod_ready.go:93] pod "etcd-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.203047   77264 pod_ready.go:82] duration metric: took 5.006684537s for pod "etcd-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.203057   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207921   77264 pod_ready.go:93] pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.207949   77264 pod_ready.go:82] duration metric: took 4.88424ms for pod "kube-apiserver-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.207964   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212804   77264 pod_ready.go:93] pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:31.212830   77264 pod_ready.go:82] duration metric: took 4.856814ms for pod "kube-controller-manager-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:31.212842   77264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221895   77264 pod_ready.go:93] pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace has status "Ready":"True"
	I0917 18:33:32.221921   77264 pod_ready.go:82] duration metric: took 1.009071567s for pod "kube-scheduler-embed-certs-081863" in "kube-system" namespace to be "Ready" ...
	I0917 18:33:32.221929   77264 pod_ready.go:39] duration metric: took 6.030554324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 18:33:32.221942   77264 api_server.go:52] waiting for apiserver process to appear ...
	I0917 18:33:32.221991   77264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:33:32.242087   77264 api_server.go:72] duration metric: took 6.265747566s to wait for apiserver process to appear ...
	I0917 18:33:32.242113   77264 api_server.go:88] waiting for apiserver healthz status ...
	I0917 18:33:32.242129   77264 api_server.go:253] Checking apiserver healthz at https://192.168.50.61:8443/healthz ...
	I0917 18:33:32.246960   77264 api_server.go:279] https://192.168.50.61:8443/healthz returned 200:
	ok
	I0917 18:33:32.248201   77264 api_server.go:141] control plane version: v1.31.1
	I0917 18:33:32.248223   77264 api_server.go:131] duration metric: took 6.105102ms to wait for apiserver health ...
	I0917 18:33:32.248231   77264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 18:33:32.257513   77264 system_pods.go:59] 9 kube-system pods found
	I0917 18:33:32.257546   77264 system_pods.go:61] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257557   77264 system_pods.go:61] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.257563   77264 system_pods.go:61] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.257569   77264 system_pods.go:61] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.257575   77264 system_pods.go:61] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.257579   77264 system_pods.go:61] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.257585   77264 system_pods.go:61] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.257593   77264 system_pods.go:61] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.257602   77264 system_pods.go:61] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.257612   77264 system_pods.go:74] duration metric: took 9.373269ms to wait for pod list to return data ...
	I0917 18:33:32.257625   77264 default_sa.go:34] waiting for default service account to be created ...
	I0917 18:33:32.264675   77264 default_sa.go:45] found service account: "default"
	I0917 18:33:32.264700   77264 default_sa.go:55] duration metric: took 7.064658ms for default service account to be created ...
	I0917 18:33:32.264711   77264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 18:33:32.270932   77264 system_pods.go:86] 9 kube-system pods found
	I0917 18:33:32.270964   77264 system_pods.go:89] "coredns-7c65d6cfc9-662sf" [dc7d0fc2-f0dd-420c-b2f9-16ddb84bf3c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270975   77264 system_pods.go:89] "coredns-7c65d6cfc9-dxjr7" [16ebe197-5fcf-4988-968b-c9edd71886ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 18:33:32.270983   77264 system_pods.go:89] "etcd-embed-certs-081863" [305d6255-3a64-42e2-ad46-cfb94470289d] Running
	I0917 18:33:32.270990   77264 system_pods.go:89] "kube-apiserver-embed-certs-081863" [693ee853-314d-49fc-884c-aaaa2ac17a59] Running
	I0917 18:33:32.270996   77264 system_pods.go:89] "kube-controller-manager-embed-certs-081863" [ff8d98db-0214-405a-858d-e720dccd0492] Running
	I0917 18:33:32.271002   77264 system_pods.go:89] "kube-proxy-7w64h" [46f3bcbd-64c9-4b30-9aa9-6f6e1eb9833b] Running
	I0917 18:33:32.271009   77264 system_pods.go:89] "kube-scheduler-embed-certs-081863" [fb3b40eb-5f37-486c-a897-c7d3574ea408] Running
	I0917 18:33:32.271018   77264 system_pods.go:89] "metrics-server-6867b74b74-98t8z" [941996a1-2109-4c06-88d1-19c6987f81bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 18:33:32.271024   77264 system_pods.go:89] "storage-provisioner" [107868ba-cf29-42b0-bb0d-c0da9b6b4c8c] Running
	I0917 18:33:32.271037   77264 system_pods.go:126] duration metric: took 6.318783ms to wait for k8s-apps to be running ...
	I0917 18:33:32.271049   77264 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 18:33:32.271102   77264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:33:32.287483   77264 system_svc.go:56] duration metric: took 16.427006ms WaitForService to wait for kubelet
	I0917 18:33:32.287516   77264 kubeadm.go:582] duration metric: took 6.311184714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 18:33:32.287535   77264 node_conditions.go:102] verifying NodePressure condition ...
	I0917 18:33:32.406700   77264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 18:33:32.406738   77264 node_conditions.go:123] node cpu capacity is 2
	I0917 18:33:32.406754   77264 node_conditions.go:105] duration metric: took 119.213403ms to run NodePressure ...
	I0917 18:33:32.406767   77264 start.go:241] waiting for startup goroutines ...
	I0917 18:33:32.406777   77264 start.go:246] waiting for cluster config update ...
	I0917 18:33:32.406791   77264 start.go:255] writing updated cluster config ...
	I0917 18:33:32.407061   77264 ssh_runner.go:195] Run: rm -f paused
	I0917 18:33:32.455606   77264 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 18:33:32.457636   77264 out.go:177] * Done! kubectl is now configured to use "embed-certs-081863" cluster and "default" namespace by default
	I0917 18:34:09.116050   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:09.116348   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:09.116382   78008 kubeadm.go:310] 
	I0917 18:34:09.116437   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:34:09.116522   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:34:09.116546   78008 kubeadm.go:310] 
	I0917 18:34:09.116595   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:34:09.116645   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:34:09.116792   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:34:09.116804   78008 kubeadm.go:310] 
	I0917 18:34:09.116949   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:34:09.116993   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:34:09.117053   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:34:09.117070   78008 kubeadm.go:310] 
	I0917 18:34:09.117199   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:34:09.117318   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:34:09.117331   78008 kubeadm.go:310] 
	I0917 18:34:09.117467   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:34:09.117585   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:34:09.117689   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:34:09.117782   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:34:09.117793   78008 kubeadm.go:310] 
	I0917 18:34:09.118509   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:34:09.118613   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:34:09.118740   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0917 18:34:09.118821   78008 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0917 18:34:09.118869   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0917 18:34:09.597153   78008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:34:09.614431   78008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 18:34:09.627627   78008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 18:34:09.627653   78008 kubeadm.go:157] found existing configuration files:
	
	I0917 18:34:09.627702   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 18:34:09.639927   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 18:34:09.639997   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 18:34:09.651694   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 18:34:09.662886   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 18:34:09.662951   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 18:34:09.675194   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.686971   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 18:34:09.687040   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 18:34:09.699343   78008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 18:34:09.711202   78008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 18:34:09.711259   78008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 18:34:09.722049   78008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 18:34:09.800536   78008 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0917 18:34:09.800589   78008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 18:34:09.951244   78008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 18:34:09.951389   78008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 18:34:09.951517   78008 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 18:34:10.148311   78008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 18:34:10.150656   78008 out.go:235]   - Generating certificates and keys ...
	I0917 18:34:10.150769   78008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 18:34:10.150858   78008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 18:34:10.150978   78008 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 18:34:10.151065   78008 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 18:34:10.151169   78008 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 18:34:10.151256   78008 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 18:34:10.151519   78008 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 18:34:10.151757   78008 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 18:34:10.152388   78008 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 18:34:10.152908   78008 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 18:34:10.153071   78008 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 18:34:10.153159   78008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 18:34:10.298790   78008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 18:34:10.463403   78008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 18:34:10.699997   78008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 18:34:10.983279   78008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 18:34:11.006708   78008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 18:34:11.008239   78008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 18:34:11.008306   78008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 18:34:11.173261   78008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 18:34:11.175163   78008 out.go:235]   - Booting up control plane ...
	I0917 18:34:11.175324   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 18:34:11.188834   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 18:34:11.189874   78008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 18:34:11.190719   78008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 18:34:11.193221   78008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 18:34:51.193814   78008 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0917 18:34:51.194231   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:51.194466   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:34:56.194972   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:34:56.195214   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:06.195454   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:06.195700   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:35:26.196645   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:35:26.196867   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199013   78008 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0917 18:36:06.199291   78008 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0917 18:36:06.199313   78008 kubeadm.go:310] 
	I0917 18:36:06.199374   78008 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0917 18:36:06.199427   78008 kubeadm.go:310] 		timed out waiting for the condition
	I0917 18:36:06.199434   78008 kubeadm.go:310] 
	I0917 18:36:06.199481   78008 kubeadm.go:310] 	This error is likely caused by:
	I0917 18:36:06.199514   78008 kubeadm.go:310] 		- The kubelet is not running
	I0917 18:36:06.199643   78008 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0917 18:36:06.199663   78008 kubeadm.go:310] 
	I0917 18:36:06.199785   78008 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0917 18:36:06.199835   78008 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0917 18:36:06.199878   78008 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0917 18:36:06.199882   78008 kubeadm.go:310] 
	I0917 18:36:06.200017   78008 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0917 18:36:06.200218   78008 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0917 18:36:06.200235   78008 kubeadm.go:310] 
	I0917 18:36:06.200391   78008 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0917 18:36:06.200515   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0917 18:36:06.200640   78008 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0917 18:36:06.200746   78008 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0917 18:36:06.200763   78008 kubeadm.go:310] 
	I0917 18:36:06.201520   78008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 18:36:06.201636   78008 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0917 18:36:06.201798   78008 kubeadm.go:394] duration metric: took 7m56.884157814s to StartCluster
	I0917 18:36:06.201852   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 18:36:06.201800   78008 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0917 18:36:06.201920   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 18:36:06.251742   78008 cri.go:89] found id: ""
	I0917 18:36:06.251773   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.251781   78008 logs.go:278] No container was found matching "kube-apiserver"
	I0917 18:36:06.251787   78008 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 18:36:06.251853   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 18:36:06.292437   78008 cri.go:89] found id: ""
	I0917 18:36:06.292471   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.292483   78008 logs.go:278] No container was found matching "etcd"
	I0917 18:36:06.292490   78008 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 18:36:06.292548   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 18:36:06.334539   78008 cri.go:89] found id: ""
	I0917 18:36:06.334571   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.334580   78008 logs.go:278] No container was found matching "coredns"
	I0917 18:36:06.334590   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 18:36:06.334641   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 18:36:06.372231   78008 cri.go:89] found id: ""
	I0917 18:36:06.372267   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.372279   78008 logs.go:278] No container was found matching "kube-scheduler"
	I0917 18:36:06.372287   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 18:36:06.372346   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 18:36:06.411995   78008 cri.go:89] found id: ""
	I0917 18:36:06.412023   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.412031   78008 logs.go:278] No container was found matching "kube-proxy"
	I0917 18:36:06.412036   78008 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 18:36:06.412100   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 18:36:06.450809   78008 cri.go:89] found id: ""
	I0917 18:36:06.450834   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.450842   78008 logs.go:278] No container was found matching "kube-controller-manager"
	I0917 18:36:06.450848   78008 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 18:36:06.450897   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 18:36:06.486772   78008 cri.go:89] found id: ""
	I0917 18:36:06.486802   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.486814   78008 logs.go:278] No container was found matching "kindnet"
	I0917 18:36:06.486831   78008 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0917 18:36:06.486886   78008 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0917 18:36:06.528167   78008 cri.go:89] found id: ""
	I0917 18:36:06.528198   78008 logs.go:276] 0 containers: []
	W0917 18:36:06.528210   78008 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0917 18:36:06.528222   78008 logs.go:123] Gathering logs for describe nodes ...
	I0917 18:36:06.528234   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0917 18:36:06.610415   78008 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0917 18:36:06.610445   78008 logs.go:123] Gathering logs for CRI-O ...
	I0917 18:36:06.610461   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 18:36:06.745881   78008 logs.go:123] Gathering logs for container status ...
	I0917 18:36:06.745921   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 18:36:06.788764   78008 logs.go:123] Gathering logs for kubelet ...
	I0917 18:36:06.788802   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 18:36:06.843477   78008 logs.go:123] Gathering logs for dmesg ...
	I0917 18:36:06.843514   78008 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0917 18:36:06.858338   78008 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0917 18:36:06.858388   78008 out.go:270] * 
	W0917 18:36:06.858456   78008 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.858485   78008 out.go:270] * 
	W0917 18:36:06.859898   78008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 18:36:06.863606   78008 out.go:201] 
	W0917 18:36:06.865246   78008 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0917 18:36:06.865293   78008 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0917 18:36:06.865313   78008 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0917 18:36:06.866942   78008 out.go:201] 
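	(The suggestion above points at a kubelet/cgroup-driver problem. A minimal troubleshooting sketch assembled only from commands already named in this log; the profile name is hypothetical, not taken from this run.)
	# Inspect the kubelet on the affected node (commands quoted from the kubeadm output above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup driver the suggestion names (replace <profile> with the actual profile)
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd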
	
	
	==> CRI-O <==
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.247240797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598846247211069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3955db16-6d68-41be-bbf0-17d36a6d0433 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.247919795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=176e37b6-f92c-4c26-aca3-f343a0dabe31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.247975877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=176e37b6-f92c-4c26-aca3-f343a0dabe31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.248008940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=176e37b6-f92c-4c26-aca3-f343a0dabe31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.282353003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=557fe70f-4a98-46a9-a25f-372f3016ea2a name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.282484603Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=557fe70f-4a98-46a9-a25f-372f3016ea2a name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.283682214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d291cfd2-5745-4c3b-949f-b6eb649be8ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.284073705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598846284051616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d291cfd2-5745-4c3b-949f-b6eb649be8ae name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.284763885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e0ecdd7-b9db-4dc1-b162-c3e40036d3bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.284819234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e0ecdd7-b9db-4dc1-b162-c3e40036d3bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.284852030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2e0ecdd7-b9db-4dc1-b162-c3e40036d3bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.319967981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de2d702f-c1e3-4194-a226-8888d43e63d6 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.320048231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de2d702f-c1e3-4194-a226-8888d43e63d6 name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.321467359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b85fc8e-ca89-4633-b007-c17c20c49dbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.322168060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598846322124380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b85fc8e-ca89-4633-b007-c17c20c49dbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.323724101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92acff86-bfee-4290-a3c5-5c0e42328199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.323889872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92acff86-bfee-4290-a3c5-5c0e42328199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.324019046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92acff86-bfee-4290-a3c5-5c0e42328199 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.360244633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=416b5ee7-56fb-4fef-bd56-a13c6d79ab3d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.360328542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=416b5ee7-56fb-4fef-bd56-a13c6d79ab3d name=/runtime.v1.RuntimeService/Version
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.361533610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3a66b9a-e3bd-4024-9a46-4d600ea06841 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.361990159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726598846361965356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3a66b9a-e3bd-4024-9a46-4d600ea06841 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.362623451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1aa97685-6bf9-426c-af99-4a218b406e71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.362674495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1aa97685-6bf9-426c-af99-4a218b406e71 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 18:47:26 old-k8s-version-190698 crio[631]: time="2024-09-17 18:47:26.362717624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1aa97685-6bf9-426c-af99-4a218b406e71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep17 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054866] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046421] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.149899] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.871080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.681331] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep17 18:28] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066256] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072788] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.186947] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.145789] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.292905] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.819811] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.084662] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.874135] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +13.062004] kauditd_printk_skb: 46 callbacks suppressed
	[Sep17 18:32] systemd-fstab-generator[5014]: Ignoring "noauto" option for root device
	[Sep17 18:34] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.068770] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:47:26 up 19 min,  0 users,  load average: 0.03, 0.07, 0.07
	Linux old-k8s-version-190698 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008f4ab0, 0xc0009a6f20)
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: goroutine 161 [chan receive]:
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0009e43f0)
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: goroutine 162 [select]:
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009edef0, 0x4f0ac20, 0xc000205bd0, 0x1, 0xc00009e0c0)
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009ba2a0, 0xc00009e0c0)
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008f4af0, 0xc0009a6fe0)
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 17 18:47:25 old-k8s-version-190698 kubelet[6775]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 17 18:47:25 old-k8s-version-190698 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 17 18:47:25 old-k8s-version-190698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 17 18:47:26 old-k8s-version-190698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Sep 17 18:47:26 old-k8s-version-190698 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 17 18:47:26 old-k8s-version-190698 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 2 (254.280715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-190698" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (133.77s)
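The failure above reduces to the kubelet on old-k8s-version-190698 never reporting healthy on port 10248, so the apiserver never comes up and every status probe is refused. A minimal troubleshooting sketch, using only the commands the captured kubeadm and minikube output themselves suggest (CONTAINERID is a placeholder to be filled in from the crictl listing):

  # on the minikube guest (e.g. via `minikube ssh -p old-k8s-version-190698`)
  systemctl status kubelet                  # is the unit running, or crash-looping?
  journalctl -xeu kubelet                   # kubelet logs around the last restart
  # list any Kubernetes containers CRI-O managed to start
  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

  # on the host: retry with the cgroup-driver hint from the minikube suggestion above
  out/minikube-linux-amd64 start -p old-k8s-version-190698 --extra-config=kubelet.cgroup-driver=systemd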

                                                
                                    

Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 8.5
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 89.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 178.43
31 TestAddons/serial/GCPAuth/Namespaces 0.16
35 TestAddons/parallel/InspektorGadget 11.99
37 TestAddons/parallel/HelmTiller 9.77
39 TestAddons/parallel/CSI 52.28
40 TestAddons/parallel/Headlamp 17.71
41 TestAddons/parallel/CloudSpanner 6.57
42 TestAddons/parallel/LocalPath 56.48
43 TestAddons/parallel/NvidiaDevicePlugin 6.53
44 TestAddons/parallel/Yakd 11.79
45 TestAddons/StoppedEnableDisable 7.57
46 TestCertOptions 77.99
47 TestCertExpiration 310.77
49 TestForceSystemdFlag 46.19
50 TestForceSystemdEnv 53.67
52 TestKVMDriverInstallOrUpdate 1.34
56 TestErrorSpam/setup 41.89
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.76
59 TestErrorSpam/pause 1.62
60 TestErrorSpam/unpause 1.87
61 TestErrorSpam/stop 5.56
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 80.4
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.54
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
73 TestFunctional/serial/CacheCmd/cache/add_local 1.12
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 35.12
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.53
84 TestFunctional/serial/LogsFileCmd 1.5
85 TestFunctional/serial/InvalidService 4.12
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 15.4
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.81
95 TestFunctional/parallel/ServiceCmdConnect 22.71
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 45.42
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 1.35
101 TestFunctional/parallel/MySQL 20.9
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.36
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.19
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
124 TestFunctional/parallel/ServiceCmd/DeployApp 21.34
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
126 TestFunctional/parallel/ProfileCmd/profile_list 0.51
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
128 TestFunctional/parallel/MountCmd/any-port 8.68
129 TestFunctional/parallel/ServiceCmd/List 0.44
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
132 TestFunctional/parallel/ServiceCmd/Format 0.32
133 TestFunctional/parallel/ServiceCmd/URL 0.32
134 TestFunctional/parallel/Version/short 0.05
135 TestFunctional/parallel/Version/components 0.8
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
141 TestFunctional/parallel/ImageCommands/Setup 0.42
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.8
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
146 TestFunctional/parallel/MountCmd/specific-port 1.74
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.93
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.44
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 191.82
158 TestMultiControlPlane/serial/DeployApp 6.26
159 TestMultiControlPlane/serial/PingHostFromPods 1.25
160 TestMultiControlPlane/serial/AddWorkerNode 56.28
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
163 TestMultiControlPlane/serial/CopyFile 13.25
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.72
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
172 TestMultiControlPlane/serial/RestartCluster 286.13
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 76.51
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
179 TestJSONOutput/start/Command 87.55
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.74
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.65
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 92.61
211 TestMountStart/serial/StartWithMountFirst 28.1
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 28.41
214 TestMountStart/serial/VerifyMountSecond 0.39
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.39
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 21.68
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 110.47
223 TestMultiNode/serial/DeployApp2Nodes 4.79
224 TestMultiNode/serial/PingHostFrom2Pods 0.82
225 TestMultiNode/serial/AddNode 50.61
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.33
229 TestMultiNode/serial/StopNode 2.4
230 TestMultiNode/serial/StartAfterStop 38.35
232 TestMultiNode/serial/DeleteNode 2.39
234 TestMultiNode/serial/RestartMultiNode 198.81
235 TestMultiNode/serial/ValidateNameConflict 47.28
242 TestScheduledStopUnix 114.25
246 TestRunningBinaryUpgrade 211.78
253 TestStoppedBinaryUpgrade/Setup 0.72
254 TestStoppedBinaryUpgrade/Upgrade 172.41
259 TestNetworkPlugins/group/false 3.34
271 TestPause/serial/Start 93.29
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
275 TestNoKubernetes/serial/StartWithK8s 43.99
277 TestNoKubernetes/serial/StartWithStopK8s 17.31
278 TestNoKubernetes/serial/Start 39.77
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
280 TestNoKubernetes/serial/ProfileList 0.75
281 TestNoKubernetes/serial/Stop 1.29
282 TestNoKubernetes/serial/StartNoArgs 67.92
283 TestNetworkPlugins/group/auto/Start 129.09
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
285 TestNetworkPlugins/group/kindnet/Start 135.3
286 TestNetworkPlugins/group/auto/KubeletFlags 0.21
287 TestNetworkPlugins/group/auto/NetCatPod 11.24
288 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
289 TestNetworkPlugins/group/auto/DNS 0.19
290 TestNetworkPlugins/group/auto/Localhost 0.15
291 TestNetworkPlugins/group/auto/HairPin 0.19
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
293 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
294 TestNetworkPlugins/group/calico/Start 73.52
295 TestNetworkPlugins/group/kindnet/DNS 0.19
296 TestNetworkPlugins/group/kindnet/Localhost 0.16
297 TestNetworkPlugins/group/kindnet/HairPin 0.16
298 TestNetworkPlugins/group/custom-flannel/Start 87.41
299 TestNetworkPlugins/group/bridge/Start 92.87
300 TestNetworkPlugins/group/calico/ControllerPod 6.01
301 TestNetworkPlugins/group/calico/KubeletFlags 0.22
302 TestNetworkPlugins/group/calico/NetCatPod 14.27
303 TestNetworkPlugins/group/calico/DNS 0.21
304 TestNetworkPlugins/group/calico/Localhost 0.21
305 TestNetworkPlugins/group/calico/HairPin 0.25
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
308 TestNetworkPlugins/group/custom-flannel/DNS 0.2
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
311 TestNetworkPlugins/group/flannel/Start 70.25
312 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
313 TestNetworkPlugins/group/bridge/NetCatPod 11.29
314 TestNetworkPlugins/group/enable-default-cni/Start 105.33
315 TestNetworkPlugins/group/bridge/DNS 0.17
316 TestNetworkPlugins/group/bridge/Localhost 0.15
317 TestNetworkPlugins/group/bridge/HairPin 0.16
321 TestStartStop/group/no-preload/serial/FirstStart 148.79
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
324 TestNetworkPlugins/group/flannel/NetCatPod 13.31
325 TestNetworkPlugins/group/flannel/DNS 0.21
326 TestNetworkPlugins/group/flannel/Localhost 0.16
327 TestNetworkPlugins/group/flannel/HairPin 0.14
329 TestStartStop/group/embed-certs/serial/FirstStart 56.62
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.36
337 TestStartStop/group/embed-certs/serial/DeployApp 9.3
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
340 TestStartStop/group/no-preload/serial/DeployApp 10.31
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
349 TestStartStop/group/embed-certs/serial/SecondStart 667
351 TestStartStop/group/no-preload/serial/SecondStart 608.09
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 559.68
354 TestStartStop/group/old-k8s-version/serial/Stop 5.58
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/newest-cni/serial/FirstStart 52.02
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
369 TestStartStop/group/newest-cni/serial/Stop 7.35
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
371 TestStartStop/group/newest-cni/serial/SecondStart 37.2
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/newest-cni/serial/Pause 3.47
x
+
TestDownloadOnly/v1.20.0/json-events (10.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-581824 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-581824 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.744126925s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.74s)
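The download-only tests exercise `minikube start --download-only`, which caches the VM boot image and the preloaded image tarball for the requested Kubernetes version without ever creating the VM (the later log output confirms the host "does not exist"). A minimal sketch of the same flow, keeping the profile name from the captured run:

  # cache artifacts for Kubernetes v1.20.0 with the crio runtime and kvm2 driver
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-581824 \
    --force --alsologtostderr --kubernetes-version=v1.20.0 \
    --container-runtime=crio --driver=kvm2
  # nothing is started, so cleanup is just deleting the profile
  out/minikube-linux-amd64 delete -p download-only-581824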

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-581824
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-581824: exit status 85 (56.321766ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |          |
	|         | -p download-only-581824        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:30.999002   18271 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:30.999261   18271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:30.999270   18271 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:30.999274   18271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:30.999501   18271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	W0917 16:55:30.999621   18271 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19662-11085/.minikube/config/config.json: open /home/jenkins/minikube-integration/19662-11085/.minikube/config/config.json: no such file or directory
	I0917 16:55:31.000230   18271 out.go:352] Setting JSON to true
	I0917 16:55:31.001137   18271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2246,"bootTime":1726589885,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:31.001201   18271 start.go:139] virtualization: kvm guest
	I0917 16:55:31.003823   18271 out.go:97] [download-only-581824] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:31.003971   18271 notify.go:220] Checking for updates...
	W0917 16:55:31.003987   18271 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:55:31.005422   18271 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:31.006967   18271 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:31.008283   18271 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:55:31.009458   18271 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:31.010820   18271 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 16:55:31.012890   18271 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:31.013079   18271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:31.493972   18271 out.go:97] Using the kvm2 driver based on user configuration
	I0917 16:55:31.493996   18271 start.go:297] selected driver: kvm2
	I0917 16:55:31.494001   18271 start.go:901] validating driver "kvm2" against <nil>
	I0917 16:55:31.494331   18271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:31.494463   18271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 16:55:31.509808   18271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 16:55:31.509852   18271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:31.510370   18271 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0917 16:55:31.510542   18271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:31.510572   18271 cni.go:84] Creating CNI manager for ""
	I0917 16:55:31.510613   18271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:55:31.510622   18271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:31.510672   18271 start.go:340] cluster config:
	{Name:download-only-581824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-581824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:31.510848   18271 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:31.512924   18271 out.go:97] Downloading VM boot image ...
	I0917 16:55:31.512979   18271 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0917 16:55:36.925730   18271 out.go:97] Starting "download-only-581824" primary control-plane node in "download-only-581824" cluster
	I0917 16:55:36.925797   18271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 16:55:36.950222   18271 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:36.950254   18271 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:36.950428   18271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0917 16:55:36.952599   18271 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 16:55:36.952630   18271 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0917 16:55:36.983126   18271 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:40.311764   18271 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0917 16:55:40.311857   18271 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-581824 host does not exist
	  To start a cluster, run: "minikube start -p download-only-581824"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-581824
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (8.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-285125 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-285125 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.499694744s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-285125
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-285125: exit status 85 (58.608982ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-581824        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-581824        | download-only-581824 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-285125 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-285125        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:42.073954   18494 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:42.074066   18494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:42.074075   18494 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:42.074079   18494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:42.074273   18494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 16:55:42.074853   18494 out.go:352] Setting JSON to true
	I0917 16:55:42.075756   18494 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2257,"bootTime":1726589885,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:42.075858   18494 start.go:139] virtualization: kvm guest
	I0917 16:55:42.078065   18494 out.go:97] [download-only-285125] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 16:55:42.078192   18494 notify.go:220] Checking for updates...
	I0917 16:55:42.079434   18494 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:42.081040   18494 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:42.082428   18494 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 16:55:42.083704   18494 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 16:55:42.084990   18494 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 16:55:42.087581   18494 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:42.087930   18494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:42.120175   18494 out.go:97] Using the kvm2 driver based on user configuration
	I0917 16:55:42.120200   18494 start.go:297] selected driver: kvm2
	I0917 16:55:42.120206   18494 start.go:901] validating driver "kvm2" against <nil>
	I0917 16:55:42.120525   18494 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:42.120602   18494 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19662-11085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 16:55:42.137294   18494 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0917 16:55:42.137354   18494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:42.137849   18494 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0917 16:55:42.137985   18494 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:42.138014   18494 cni.go:84] Creating CNI manager for ""
	I0917 16:55:42.138061   18494 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 16:55:42.138069   18494 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:42.138121   18494 start.go:340] cluster config:
	{Name:download-only-285125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-285125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:42.138220   18494 iso.go:125] acquiring lock: {Name:mkee7131e09b1c5eb94d106bda884bcbdbf6f906 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:55:42.139935   18494 out.go:97] Starting "download-only-285125" primary control-plane node in "download-only-285125" cluster
	I0917 16:55:42.139955   18494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:55:42.170082   18494 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 16:55:42.170110   18494 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:42.170270   18494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 16:55:42.172138   18494 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 16:55:42.172157   18494 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0917 16:55:42.196910   18494 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19662-11085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-285125 host does not exist
	  To start a cluster, run: "minikube start -p download-only-285125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-285125
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-510758 --alsologtostderr --binary-mirror http://127.0.0.1:36709 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-510758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-510758
--- PASS: TestBinaryMirror (0.59s)
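TestBinaryMirror points the download-only flow at a local HTTP server via --binary-mirror, so the kubectl/kubelet/kubeadm binaries are fetched from the mirror instead of the default upstream location. A minimal sketch under the same assumptions as the captured run (127.0.0.1:36709 is the test's own throwaway server; substitute a real mirror URL in practice):

  out/minikube-linux-amd64 start --download-only -p binary-mirror-510758 \
    --alsologtostderr --binary-mirror http://127.0.0.1:36709 \
    --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 delete -p binary-mirror-510758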

                                                
                                    
x
+
TestOffline (89.38s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-624774 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-624774 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.314410383s)
helpers_test.go:175: Cleaning up "offline-crio-624774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-624774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-624774: (1.069652162s)
--- PASS: TestOffline (89.38s)
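TestOffline starts a fresh crio profile with --wait=true, so minikube blocks until the default set of cluster components reports healthy, and then deletes the profile. The captured commands, restated as they could be run by hand:

  out/minikube-linux-amd64 start -p offline-crio-624774 --alsologtostderr -v=1 \
    --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 delete -p offline-crio-624774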

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-408385
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-408385: exit status 85 (48.414916ms)

                                                
                                                
-- stdout --
	* Profile "addons-408385" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408385"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-408385
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-408385: exit status 85 (48.355494ms)

                                                
                                                
-- stdout --
	* Profile "addons-408385" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-408385"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (178.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-408385 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-408385 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m58.426471867s)
--- PASS: TestAddons/Setup (178.43s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-408385 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-408385 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
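A sketch of the same verification by hand, assuming the gcp-auth addon is already running in the addons-408385 profile; the secret should be copied into any newly created namespace:

  kubectl --context addons-408385 create ns new-namespace
  # gcp-auth is expected to replicate its secret into the new namespace
  kubectl --context addons-408385 get secret gcp-auth -n new-namespace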

                                                
                                    
TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gk6jl" [de28ed39-ce09-4f1d-be90-3b5c0d786949] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005715923s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-408385
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-408385: (5.98504701s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

                                                
                                    
TestAddons/parallel/HelmTiller (9.77s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.427048ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-r4h85" [6b8d783b-0417-4bca-bedd-0283ba1faf18] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003889825s
addons_test.go:475: (dbg) Run:  kubectl --context addons-408385 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-408385 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.125184097s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.77s)
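Roughly the same check by hand, assuming the helm-tiller addon is enabled on addons-408385; a helm 2.x client pod should reach tiller-deploy and report both client and server versions:

  kubectl --context addons-408385 run --rm helm-test --restart=Never \
    --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
  # clean up afterwards, as the test does
  out/minikube-linux-amd64 -p addons-408385 addons disable helm-tiller --alsologtostderr -v=1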

                                                
                                    
TestAddons/parallel/CSI (52.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.600089ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e5374bae-48c9-4343-8c09-9463ac0199b6] Pending
helpers_test.go:344: "task-pv-pod" [e5374bae-48c9-4343-8c09-9463ac0199b6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e5374bae-48c9-4343-8c09-9463ac0199b6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004483398s
addons_test.go:590: (dbg) Run:  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-408385 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-408385 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-408385 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-408385 delete pod task-pv-pod: (1.3383214s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-408385 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b86ede50-2575-41e9-9c31-030ee450f3d2] Pending
helpers_test.go:344: "task-pv-pod-restore" [b86ede50-2575-41e9-9c31-030ee450f3d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b86ede50-2575-41e9-9c31-030ee450f3d2] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004682131s
addons_test.go:632: (dbg) Run:  kubectl --context addons-408385 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-408385 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-408385 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.845821099s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable volumesnapshots --alsologtostderr -v=1: (1.067003275s)
--- PASS: TestAddons/parallel/CSI (52.28s)
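The flow above, compressed into a manual sketch. The manifest paths are the testdata files named in the log; using kubectl wait with a jsonpath condition instead of the polling loops is an assumption (requires a reasonably recent kubectl):

  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-408385 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-408385 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-408385 delete pod task-pv-pod
  kubectl --context addons-408385 delete pvc hpvc
  # restore a new PVC and pod from the new-snapshot-demo snapshot
  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-408385 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml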

                                                
                                    
TestAddons/parallel/Headlamp (17.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-408385 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-xkmfd" [2c0d9263-1a2c-454a-9a7e-aaf81f650983] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-xkmfd" [2c0d9263-1a2c-454a-9a7e-aaf81f650983] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-xkmfd" [2c0d9263-1a2c-454a-9a7e-aaf81f650983] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.018507219s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable headlamp --alsologtostderr -v=1: (5.744962541s)
--- PASS: TestAddons/parallel/Headlamp (17.71s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-bts6k" [82c244a3-5a3d-428b-9b81-02ea087e5124] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004638695s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-408385
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
TestAddons/parallel/LocalPath (56.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-408385 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-408385 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d96958d4-882d-423d-a7ce-48bf864ea17a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d96958d4-882d-423d-a7ce-48bf864ea17a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d96958d4-882d-423d-a7ce-48bf864ea17a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003441108s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-408385 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 ssh "cat /opt/local-path-provisioner/pvc-909e1d4d-bf3e-45b2-8d6d-fc1ce31d7fc6_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-408385 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-408385 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.624790564s)
--- PASS: TestAddons/parallel/LocalPath (56.48s)
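The data-path check in the middle of this test can be repeated by hand. The pvc-... directory name is specific to this run, so it has to be read from the PVC first; this sketch assumes the provisioner's <pv-name>_<namespace>_<pvc-name> directory layout, which matches the path in the log:

  PV=$(kubectl --context addons-408385 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
  out/minikube-linux-amd64 -p addons-408385 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"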

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-95n5v" [48c0bfc6-64c7-473b-9f8c-429d8af8f349] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004574622s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-408385
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sjbnb" [63f7f867-db6c-4e11-b32b-b52255c5a318] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006237841s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-408385 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-408385 addons disable yakd --alsologtostderr -v=1: (5.781587991s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.57s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-408385
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-408385: (7.295992265s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-408385
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-408385
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-408385
--- PASS: TestAddons/StoppedEnableDisable (7.57s)
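The same sequence by hand, assuming an existing addons-408385 profile; addon enable/disable should still behave sensibly once the cluster is stopped:

  out/minikube-linux-amd64 stop -p addons-408385
  out/minikube-linux-amd64 addons enable dashboard -p addons-408385
  out/minikube-linux-amd64 addons disable dashboard -p addons-408385
  out/minikube-linux-amd64 addons disable gvisor -p addons-408385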

                                                
                                    
TestCertOptions (77.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-111998 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0917 18:11:24.984710   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-111998 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m15.931177503s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-111998 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-111998 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-111998 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-111998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-111998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-111998: (1.585802017s)
--- PASS: TestCertOptions (77.99s)
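To see the effect of the extra --apiserver-ips/--apiserver-names flags by hand, the generated API server certificate can be inspected inside the node. The grep is only a convenience and assumes standard openssl text output:

  out/minikube-linux-amd64 -p cert-options-111998 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"
  # expected to list 192.168.15.15, localhost and www.google.com among the SANs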

                                                
                                    
TestCertExpiration (310.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m28.796138115s)
E0917 18:13:50.532111   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.944188765s)
helpers_test.go:175: Cleaning up "cert-expiration-297256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-297256
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-297256: (1.027705453s)
--- PASS: TestCertExpiration (310.77s)
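A condensed manual version of this scenario, using the flags from the log; the pause between the two starts (so the 3-minute certificates actually expire) is implicit in the test's total runtime:

  out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=3m \
    --driver=kvm2 --container-runtime=crio
  sleep 180   # let the short-lived certificates expire
  out/minikube-linux-amd64 start -p cert-expiration-297256 --memory=2048 --cert-expiration=8760h \
    --driver=kvm2 --container-runtime=crio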

                                                
                                    
TestForceSystemdFlag (46.19s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-722424 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-722424 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.981936903s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-722424 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-722424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-722424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-722424: (1.018171992s)
--- PASS: TestForceSystemdFlag (46.19s)
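The follow-up ssh command suggests the assertion is about the CRI-O cgroup manager. A hand-run sketch; the exact key being checked is an assumption, not stated in the log:

  out/minikube-linux-amd64 start -p force-systemd-flag-722424 --memory=2048 --force-systemd \
    --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p force-systemd-flag-722424 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
    | grep cgroup_manager   # presumably "systemd" when --force-systemd is set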

                                                
                                    
TestForceSystemdEnv (53.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-085164 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-085164 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.845906552s)
helpers_test.go:175: Cleaning up "force-systemd-env-085164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-085164
--- PASS: TestForceSystemdEnv (53.67s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.34s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.34s)

                                                
                                    
TestErrorSpam/setup (41.89s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-651223 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-651223 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-651223 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-651223 --driver=kvm2  --container-runtime=crio: (41.891602345s)
--- PASS: TestErrorSpam/setup (41.89s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (5.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop: (2.313146227s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop: (1.992643406s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-651223 --log_dir /tmp/nospam-651223 stop: (1.252807675s)
--- PASS: TestErrorSpam/stop (5.56s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19662-11085/.minikube/files/etc/test/nested/copy/18259/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.4s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0917 17:13:50.534831   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.541726   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.553162   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.574517   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.615911   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.697408   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:50.859156   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:51.181106   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:51.822446   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:53.104064   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:13:55.667034   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:00.788846   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:11.030755   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:31.512897   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-853088 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.395379612s)
--- PASS: TestFunctional/serial/StartWithProxy (80.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.54s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --alsologtostderr -v=8
E0917 17:15:12.474301   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-853088 --alsologtostderr -v=8: (34.539226843s)
functional_test.go:663: soft start took 34.539968512s for "functional-853088" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.54s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-853088 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:3.1: (1.148512464s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:3.3: (1.251252078s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 cache add registry.k8s.io/pause:latest: (1.146859433s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-853088 /tmp/TestFunctionalserialCacheCmdcacheadd_local975331243/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache add minikube-local-cache-test:functional-853088
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache delete minikube-local-cache-test:functional-853088
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-853088
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.93531ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 cache reload: (1.028612941s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
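The reload round-trip by hand, against the functional-853088 profile from the log: remove the image inside the node, confirm it is gone, then repopulate it from minikube's local cache.

  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
  out/minikube-linux-amd64 -p functional-853088 cache reload
  out/minikube-linux-amd64 -p functional-853088 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again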

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 kubectl -- --context functional-853088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-853088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-853088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.11889055s)
functional_test.go:761: restart took 35.119005521s for "functional-853088" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-853088 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
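The same control-plane health readout can be pulled with a jsonpath instead of parsing the full JSON; a sketch, with the label selector taken from the log:

  kubectl --context functional-853088 get po -n kube-system -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'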

                                                
                                    
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 logs: (1.530058083s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 logs --file /tmp/TestFunctionalserialLogsFileCmd3940026441/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 logs --file /tmp/TestFunctionalserialLogsFileCmd3940026441/001/logs.txt: (1.497061529s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-853088 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-853088
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-853088: exit status 115 (288.301178ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.158:32319 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-853088 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
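A quick manual reproduction, assuming the testdata manifest from the log: a service with no running backing pod should make minikube service bail out with exit status 115 (SVC_UNREACHABLE), as shown above.

  kubectl --context functional-853088 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-853088; echo $?   # expected: 115
  kubectl --context functional-853088 delete -f testdata/invalidsvc.yaml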

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 config get cpus: exit status 14 (53.146839ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 config get cpus: exit status 14 (59.07047ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
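The unset/get/set round-trip by hand; exit status 14 is what the log shows when the key is absent from the config.

  out/minikube-linux-amd64 -p functional-853088 config unset cpus
  out/minikube-linux-amd64 -p functional-853088 config get cpus; echo $?   # 14: key not set
  out/minikube-linux-amd64 -p functional-853088 config set cpus 2
  out/minikube-linux-amd64 -p functional-853088 config get cpus            # prints 2
  out/minikube-linux-amd64 -p functional-853088 config unset cpus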

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853088 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853088 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28475: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.40s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.79685ms)

                                                
                                                
-- stdout --
	* [functional-853088] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:16:49.678386   28259 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:16:49.678530   28259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:16:49.678544   28259 out.go:358] Setting ErrFile to fd 2...
	I0917 17:16:49.678549   28259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:16:49.678752   28259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:16:49.679297   28259 out.go:352] Setting JSON to false
	I0917 17:16:49.680243   28259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3525,"bootTime":1726589885,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:16:49.680340   28259 start.go:139] virtualization: kvm guest
	I0917 17:16:49.682477   28259 out.go:177] * [functional-853088] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:16:49.683942   28259 notify.go:220] Checking for updates...
	I0917 17:16:49.683967   28259 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:16:49.685396   28259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:16:49.686743   28259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:16:49.688060   28259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:16:49.689519   28259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:16:49.694973   28259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:16:49.697137   28259 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:16:49.697715   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:16:49.697803   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:16:49.715694   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0917 17:16:49.716215   28259 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:16:49.716837   28259 main.go:141] libmachine: Using API Version  1
	I0917 17:16:49.716851   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:16:49.717206   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:16:49.717404   28259 main.go:141] libmachine: (functional-853088) Calling .DriverName
	I0917 17:16:49.717646   28259 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:16:49.717927   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:16:49.717958   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:16:49.733619   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0917 17:16:49.734173   28259 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:16:49.734708   28259 main.go:141] libmachine: Using API Version  1
	I0917 17:16:49.734738   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:16:49.735091   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:16:49.735263   28259 main.go:141] libmachine: (functional-853088) Calling .DriverName
	I0917 17:16:49.773024   28259 out.go:177] * Using the kvm2 driver based on existing profile
	I0917 17:16:49.774620   28259 start.go:297] selected driver: kvm2
	I0917 17:16:49.774647   28259 start.go:901] validating driver "kvm2" against &{Name:functional-853088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-853088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:16:49.774751   28259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:16:49.777052   28259 out.go:201] 
	W0917 17:16:49.778267   28259 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:16:49.779776   28259 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
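For reference, a rough shell sketch of the memory check this dry run exercises; it is not part of the captured output, and the 4096MB value is simply an arbitrary figure above the 1800MB minimum reported by minikube:

    # rejected: 250MB is below the usable minimum of 1800MB (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY)
    out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # expected to clear the same validation; --dry-run never applies changes to the existing cluster
    out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 4096MB --driver=kvm2 --container-runtime=crio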

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.272568ms)

-- stdout --
	* [functional-853088] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 17:16:49.536004   28226 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:16:49.536117   28226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:16:49.536126   28226 out.go:358] Setting ErrFile to fd 2...
	I0917 17:16:49.536132   28226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:16:49.536384   28226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:16:49.536913   28226 out.go:352] Setting JSON to false
	I0917 17:16:49.537866   28226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3525,"bootTime":1726589885,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:16:49.537966   28226 start.go:139] virtualization: kvm guest
	I0917 17:16:49.540406   28226 out.go:177] * [functional-853088] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0917 17:16:49.541912   28226 notify.go:220] Checking for updates...
	I0917 17:16:49.541929   28226 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:16:49.543239   28226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:16:49.544574   28226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 17:16:49.545916   28226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 17:16:49.547551   28226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:16:49.548928   28226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:16:49.550706   28226 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:16:49.551076   28226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:16:49.551121   28226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:16:49.566549   28226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0917 17:16:49.567016   28226 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:16:49.567519   28226 main.go:141] libmachine: Using API Version  1
	I0917 17:16:49.567542   28226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:16:49.567900   28226 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:16:49.568092   28226 main.go:141] libmachine: (functional-853088) Calling .DriverName
	I0917 17:16:49.568328   28226 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:16:49.568661   28226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:16:49.568702   28226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:16:49.584001   28226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33639
	I0917 17:16:49.584464   28226 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:16:49.584882   28226 main.go:141] libmachine: Using API Version  1
	I0917 17:16:49.584911   28226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:16:49.585346   28226 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:16:49.585548   28226 main.go:141] libmachine: (functional-853088) Calling .DriverName
	I0917 17:16:49.621550   28226 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0917 17:16:49.623065   28226 start.go:297] selected driver: kvm2
	I0917 17:16:49.623084   28226 start.go:901] validating driver "kvm2" against &{Name:functional-853088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-853088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:16:49.623199   28226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:16:49.625196   28226 out.go:201] 
	W0917 17:16:49.626590   28226 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:16:49.627822   28226 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.81s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
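For reference, a rough sketch of the status invocations exercised above; the quoting and the corrected "kubelet" label are illustrative choices, not what the test literally passes:

    out/minikube-linux-amd64 -p functional-853088 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-853088 status -o json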

TestFunctional/parallel/ServiceCmdConnect (22.71s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-853088 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-853088 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5rhd6" [01c8b688-9dbd-4783-b0b7-40884ae96a00] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5rhd6" [01c8b688-9dbd-4783-b0b7-40884ae96a00] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.003707834s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.158:30269
functional_test.go:1675: http://192.168.39.158:30269: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5rhd6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.158:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.158:30269
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.71s)
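For reference, a condensed sketch of the sequence this test performs; the final curl call is an illustrative addition rather than something the test runs:

    kubectl --context functional-853088 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-853088 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-853088 service hello-node-connect --url)
    curl -s "$URL"   # returns the echoserver request report shown above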

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (45.42s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a570f5ae-da19-45d6-bb71-9f111f835b86] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.309358048s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-853088 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-853088 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-853088 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-853088 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853088 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [648ccd62-4c4d-4243-8112-3e92bf969873] Pending
helpers_test.go:344: "sp-pod" [648ccd62-4c4d-4243-8112-3e92bf969873] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0917 17:16:34.396623   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [648ccd62-4c4d-4243-8112-3e92bf969873] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004500527s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-853088 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-853088 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853088 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf802dda-d7af-401b-ac86-de2b934e3de5] Pending
helpers_test.go:344: "sp-pod" [bf802dda-d7af-401b-ac86-de2b934e3de5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bf802dda-d7af-401b-ac86-de2b934e3de5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004988692s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-853088 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.42s)
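For reference, a condensed sketch of the persistence check above: a file written into the claim-backed mount survives deleting and recreating the pod (the testdata manifests themselves are not shown in this log):

    kubectl --context functional-853088 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-853088 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-853088 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-853088 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-853088 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-853088 exec sp-pod -- ls /tmp/mount   # expected to still list foo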

TestFunctional/parallel/SSHCmd (0.47s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.35s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh -n functional-853088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cp functional-853088:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd797895349/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh -n functional-853088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh -n functional-853088 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

TestFunctional/parallel/MySQL (20.9s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-853088 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-sfrjv" [f3f6bcf0-0c1b-4b3f-acd5-9865c4dcc746] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-sfrjv" [f3f6bcf0-0c1b-4b3f-acd5-9865c4dcc746] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004189152s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-853088 exec mysql-6cdb49bbb-sfrjv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-853088 exec mysql-6cdb49bbb-sfrjv -- mysql -ppassword -e "show databases;": exit status 1 (204.366761ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-853088 exec mysql-6cdb49bbb-sfrjv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.90s)

TestFunctional/parallel/FileSync (0.23s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18259/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /etc/test/nested/copy/18259/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.36s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18259.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /etc/ssl/certs/18259.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18259.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /usr/share/ca-certificates/18259.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/182592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /etc/ssl/certs/182592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/182592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /usr/share/ca-certificates/182592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-853088 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active docker": exit status 1 (228.403475ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active containerd": exit status 1 (246.058187ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
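For reference, a rough sketch of the same check plus one extra probe; the crio line is an assumption not run by the test, based on CRI-O being the configured runtime for this profile:

    out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active docker"       # prints inactive; remote command exits 3, minikube exits 1
    out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active containerd"   # prints inactive; remote command exits 3, minikube exits 1
    out/minikube-linux-amd64 -p functional-853088 ssh "sudo systemctl is-active crio"         # assumed to print active on this profile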

TestFunctional/parallel/License (0.19s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-853088 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-853088 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rc7kv" [32b8d02b-afb5-4df3-8ffa-11de980cc915] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rc7kv" [32b8d02b-afb5-4df3-8ffa-11de980cc915] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.003697547s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.34s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "373.642232ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "134.562212ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "251.059237ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.001552ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/any-port (8.68s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdany-port2831612851/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726593406817065090" to /tmp/TestFunctionalparallelMountCmdany-port2831612851/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726593406817065090" to /tmp/TestFunctionalparallelMountCmdany-port2831612851/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726593406817065090" to /tmp/TestFunctionalparallelMountCmdany-port2831612851/001/test-1726593406817065090
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.130328ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:16 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:16 test-1726593406817065090
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh cat /mount-9p/test-1726593406817065090
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-853088 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [905c90bc-0e70-414c-bee1-f198dfad6987] Pending
helpers_test.go:344: "busybox-mount" [905c90bc-0e70-414c-bee1-f198dfad6987] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [905c90bc-0e70-414c-bee1-f198dfad6987] Running
helpers_test.go:344: "busybox-mount" [905c90bc-0e70-414c-bee1-f198dfad6987] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [905c90bc-0e70-414c-bee1-f198dfad6987] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.0044852s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-853088 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdany-port2831612851/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.68s)
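For reference, a rough sketch of the 9p mount flow exercised above; /tmp/example-host-dir is a made-up host path (the test uses a temporary directory):

    out/minikube-linux-amd64 mount -p functional-853088 /tmp/example-host-dir:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry while the mount comes up, as seen above
    out/minikube-linux-amd64 -p functional-853088 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"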

TestFunctional/parallel/ServiceCmd/List (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service list -o json
functional_test.go:1494: Took "442.899049ms" to run "out/minikube-linux-amd64 -p functional-853088 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.158:31569
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.158:31569
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.8s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853088 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-853088
localhost/kicbase/echo-server:functional-853088
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853088 image ls --format short --alsologtostderr:
I0917 17:17:01.064934   29398 out.go:345] Setting OutFile to fd 1 ...
I0917 17:17:01.065035   29398 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.065045   29398 out.go:358] Setting ErrFile to fd 2...
I0917 17:17:01.065052   29398 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.065325   29398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
I0917 17:17:01.065980   29398 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.066075   29398 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.066519   29398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.066571   29398 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.082947   29398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
I0917 17:17:01.083386   29398 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.083889   29398 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.083915   29398 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.084326   29398 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.084502   29398 main.go:141] libmachine: (functional-853088) Calling .GetState
I0917 17:17:01.086478   29398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.086529   29398 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.106070   29398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
I0917 17:17:01.106506   29398 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.106995   29398 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.107015   29398 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.107308   29398 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.107438   29398 main.go:141] libmachine: (functional-853088) Calling .DriverName
I0917 17:17:01.107613   29398 ssh_runner.go:195] Run: systemctl --version
I0917 17:17:01.107639   29398 main.go:141] libmachine: (functional-853088) Calling .GetSSHHostname
I0917 17:17:01.110329   29398 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.110684   29398 main.go:141] libmachine: (functional-853088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:2e", ip: ""} in network mk-functional-853088: {Iface:virbr1 ExpiryTime:2024-09-17 18:13:55 +0000 UTC Type:0 Mac:52:54:00:77:db:2e Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-853088 Clientid:01:52:54:00:77:db:2e}
I0917 17:17:01.110710   29398 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined IP address 192.168.39.158 and MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.110855   29398 main.go:141] libmachine: (functional-853088) Calling .GetSSHPort
I0917 17:17:01.110982   29398 main.go:141] libmachine: (functional-853088) Calling .GetSSHKeyPath
I0917 17:17:01.111087   29398 main.go:141] libmachine: (functional-853088) Calling .GetSSHUsername
I0917 17:17:01.111183   29398 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/functional-853088/id_rsa Username:docker}
I0917 17:17:01.194133   29398 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 17:17:01.245694   29398 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.245705   29398 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.246043   29398 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
I0917 17:17:01.246049   29398 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.246075   29398 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:01.246089   29398 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.246098   29398 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.246331   29398 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.246367   29398 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:01.246336   29398 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853088 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-853088  | a30c65d8c3251 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| localhost/kicbase/echo-server           | functional-853088  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853088 image ls --format table --alsologtostderr:
I0917 17:17:01.533136   29513 out.go:345] Setting OutFile to fd 1 ...
I0917 17:17:01.533424   29513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.533434   29513 out.go:358] Setting ErrFile to fd 2...
I0917 17:17:01.533439   29513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.533616   29513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
I0917 17:17:01.534199   29513 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.534289   29513 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.534641   29513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.534678   29513 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.551230   29513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
I0917 17:17:01.551729   29513 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.552338   29513 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.552361   29513 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.552710   29513 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.552921   29513 main.go:141] libmachine: (functional-853088) Calling .GetState
I0917 17:17:01.554952   29513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.554998   29513 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.577812   29513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
I0917 17:17:01.578320   29513 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.578897   29513 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.578921   29513 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.579288   29513 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.579581   29513 main.go:141] libmachine: (functional-853088) Calling .DriverName
I0917 17:17:01.579769   29513 ssh_runner.go:195] Run: systemctl --version
I0917 17:17:01.579803   29513 main.go:141] libmachine: (functional-853088) Calling .GetSSHHostname
I0917 17:17:01.582821   29513 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.583186   29513 main.go:141] libmachine: (functional-853088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:2e", ip: ""} in network mk-functional-853088: {Iface:virbr1 ExpiryTime:2024-09-17 18:13:55 +0000 UTC Type:0 Mac:52:54:00:77:db:2e Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-853088 Clientid:01:52:54:00:77:db:2e}
I0917 17:17:01.583216   29513 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined IP address 192.168.39.158 and MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.583420   29513 main.go:141] libmachine: (functional-853088) Calling .GetSSHPort
I0917 17:17:01.583617   29513 main.go:141] libmachine: (functional-853088) Calling .GetSSHKeyPath
I0917 17:17:01.583788   29513 main.go:141] libmachine: (functional-853088) Calling .GetSSHUsername
I0917 17:17:01.583937   29513 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/functional-853088/id_rsa Username:docker}
I0917 17:17:01.668604   29513 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 17:17:01.718418   29513 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.718434   29513 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.718729   29513 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.718751   29513 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:01.718762   29513 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.718767   29513 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
I0917 17:17:01.718771   29513 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.719003   29513 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
I0917 17:17:01.719003   29513 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.719031   29513 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853088 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["regis
try.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f
4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size"
:"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-853088"],"size":"49438
77"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"a30c65d8c3251903b251d8f0c8439e7af6c52cbe7de941642120f4cd505e3288","repoDigests":["localhost/minikube-local-cache-test@sha256:7a3ff0b895ff22dd5da1b073361f7fdd48d5dc886d26e77b3be5d82003d71682"],"repoTags":["localhost/minikube-local-cache-test:functional-853088"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns
/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@
sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853088 image ls --format json --alsologtostderr:
I0917 17:17:01.304519   29446 out.go:345] Setting OutFile to fd 1 ...
I0917 17:17:01.304836   29446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.304849   29446 out.go:358] Setting ErrFile to fd 2...
I0917 17:17:01.304858   29446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.305272   29446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
I0917 17:17:01.305964   29446 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.306073   29446 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.306487   29446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.306529   29446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.322105   29446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36161
I0917 17:17:01.322664   29446 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.323210   29446 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.323236   29446 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.323587   29446 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.323799   29446 main.go:141] libmachine: (functional-853088) Calling .GetState
I0917 17:17:01.325649   29446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.325691   29446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.342249   29446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
I0917 17:17:01.342807   29446 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.343314   29446 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.343340   29446 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.343705   29446 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.343878   29446 main.go:141] libmachine: (functional-853088) Calling .DriverName
I0917 17:17:01.344098   29446 ssh_runner.go:195] Run: systemctl --version
I0917 17:17:01.344133   29446 main.go:141] libmachine: (functional-853088) Calling .GetSSHHostname
I0917 17:17:01.347390   29446 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.347807   29446 main.go:141] libmachine: (functional-853088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:2e", ip: ""} in network mk-functional-853088: {Iface:virbr1 ExpiryTime:2024-09-17 18:13:55 +0000 UTC Type:0 Mac:52:54:00:77:db:2e Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-853088 Clientid:01:52:54:00:77:db:2e}
I0917 17:17:01.347838   29446 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined IP address 192.168.39.158 and MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.347993   29446 main.go:141] libmachine: (functional-853088) Calling .GetSSHPort
I0917 17:17:01.348148   29446 main.go:141] libmachine: (functional-853088) Calling .GetSSHKeyPath
I0917 17:17:01.348288   29446 main.go:141] libmachine: (functional-853088) Calling .GetSSHUsername
I0917 17:17:01.348452   29446 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/functional-853088/id_rsa Username:docker}
I0917 17:17:01.429034   29446 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 17:17:01.477783   29446 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.477800   29446 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.478199   29446 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
I0917 17:17:01.478208   29446 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.478239   29446 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:01.478249   29446 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.478263   29446 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.478661   29446 main.go:141] libmachine: (functional-853088) DBG | Closing plugin on server side
I0917 17:17:01.478756   29446 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.478789   29446 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
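The `image ls --format json` output above is a flat JSON array of image records. As a minimal sketch (assuming only the fields visible in this log: id, repoDigests, repoTags, size, and the profile name functional-853088), it can be decoded from plain Go like this:

// Sketch only: decode the `image ls --format json` output shown above.
// The binary path and profile name are copied from this log; other
// environments would substitute their own.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the JSON output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-853088",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", firstOr(img.RepoTags, img.ID), img.Size)
	}
}

// firstOr returns the first tag if present, otherwise the fallback. Some
// entries above (the dashboard images) have an empty repoTags list and are
// only identifiable by ID.
func firstOr(tags []string, fallback string) string {
	if len(tags) > 0 {
		return tags[0]
	}
	return fallback
}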

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853088 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-853088
size: "4943877"
- id: a30c65d8c3251903b251d8f0c8439e7af6c52cbe7de941642120f4cd505e3288
repoDigests:
- localhost/minikube-local-cache-test@sha256:7a3ff0b895ff22dd5da1b073361f7fdd48d5dc886d26e77b3be5d82003d71682
repoTags:
- localhost/minikube-local-cache-test:functional-853088
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853088 image ls --format yaml --alsologtostderr:
I0917 17:17:01.060928   29399 out.go:345] Setting OutFile to fd 1 ...
I0917 17:17:01.061059   29399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.061075   29399 out.go:358] Setting ErrFile to fd 2...
I0917 17:17:01.061080   29399 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.061322   29399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
I0917 17:17:01.061919   29399 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.062015   29399 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.062379   29399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.062422   29399 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.078856   29399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
I0917 17:17:01.079431   29399 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.080046   29399 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.080068   29399 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.080437   29399 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.080651   29399 main.go:141] libmachine: (functional-853088) Calling .GetState
I0917 17:17:01.083020   29399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.083093   29399 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.098670   29399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
I0917 17:17:01.099150   29399 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.099632   29399 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.099653   29399 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.100009   29399 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.100216   29399 main.go:141] libmachine: (functional-853088) Calling .DriverName
I0917 17:17:01.100422   29399 ssh_runner.go:195] Run: systemctl --version
I0917 17:17:01.100448   29399 main.go:141] libmachine: (functional-853088) Calling .GetSSHHostname
I0917 17:17:01.102983   29399 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.103399   29399 main.go:141] libmachine: (functional-853088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:2e", ip: ""} in network mk-functional-853088: {Iface:virbr1 ExpiryTime:2024-09-17 18:13:55 +0000 UTC Type:0 Mac:52:54:00:77:db:2e Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-853088 Clientid:01:52:54:00:77:db:2e}
I0917 17:17:01.103426   29399 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined IP address 192.168.39.158 and MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.103596   29399 main.go:141] libmachine: (functional-853088) Calling .GetSSHPort
I0917 17:17:01.103797   29399 main.go:141] libmachine: (functional-853088) Calling .GetSSHKeyPath
I0917 17:17:01.103923   29399 main.go:141] libmachine: (functional-853088) Calling .GetSSHUsername
I0917 17:17:01.104132   29399 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/functional-853088/id_rsa Username:docker}
I0917 17:17:01.184250   29399 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 17:17:01.248302   29399 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.248315   29399 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.248604   29399 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.248620   29399 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:01.248635   29399 main.go:141] libmachine: Making call to close driver server
I0917 17:17:01.248644   29399 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:01.248920   29399 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:01.248931   29399 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh pgrep buildkitd: exit status 1 (221.574064ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image build -t localhost/my-image:functional-853088 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 image build -t localhost/my-image:functional-853088 testdata/build --alsologtostderr: (2.651952456s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853088 image build -t localhost/my-image:functional-853088 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 044b277a167
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-853088
--> 540bf44f768
Successfully tagged localhost/my-image:functional-853088
540bf44f768015f91a2aa26495b40837dd9d53f9636edeade3260348367e1e07
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853088 image build -t localhost/my-image:functional-853088 testdata/build --alsologtostderr:
I0917 17:17:01.526210   29507 out.go:345] Setting OutFile to fd 1 ...
I0917 17:17:01.526434   29507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.526450   29507 out.go:358] Setting ErrFile to fd 2...
I0917 17:17:01.526458   29507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:17:01.526748   29507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
I0917 17:17:01.527774   29507 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.528427   29507 config.go:182] Loaded profile config "functional-853088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 17:17:01.528814   29507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.528859   29507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.544662   29507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
I0917 17:17:01.545126   29507 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.545685   29507 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.545708   29507 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.546015   29507 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.546223   29507 main.go:141] libmachine: (functional-853088) Calling .GetState
I0917 17:17:01.548248   29507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 17:17:01.548299   29507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 17:17:01.563738   29507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
I0917 17:17:01.564200   29507 main.go:141] libmachine: () Calling .GetVersion
I0917 17:17:01.564767   29507 main.go:141] libmachine: Using API Version  1
I0917 17:17:01.564787   29507 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 17:17:01.565120   29507 main.go:141] libmachine: () Calling .GetMachineName
I0917 17:17:01.565304   29507 main.go:141] libmachine: (functional-853088) Calling .DriverName
I0917 17:17:01.565495   29507 ssh_runner.go:195] Run: systemctl --version
I0917 17:17:01.565531   29507 main.go:141] libmachine: (functional-853088) Calling .GetSSHHostname
I0917 17:17:01.568586   29507 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.569002   29507 main.go:141] libmachine: (functional-853088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:db:2e", ip: ""} in network mk-functional-853088: {Iface:virbr1 ExpiryTime:2024-09-17 18:13:55 +0000 UTC Type:0 Mac:52:54:00:77:db:2e Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-853088 Clientid:01:52:54:00:77:db:2e}
I0917 17:17:01.569025   29507 main.go:141] libmachine: (functional-853088) DBG | domain functional-853088 has defined IP address 192.168.39.158 and MAC address 52:54:00:77:db:2e in network mk-functional-853088
I0917 17:17:01.569156   29507 main.go:141] libmachine: (functional-853088) Calling .GetSSHPort
I0917 17:17:01.569366   29507 main.go:141] libmachine: (functional-853088) Calling .GetSSHKeyPath
I0917 17:17:01.569509   29507 main.go:141] libmachine: (functional-853088) Calling .GetSSHUsername
I0917 17:17:01.569681   29507 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/functional-853088/id_rsa Username:docker}
I0917 17:17:01.652315   29507 build_images.go:161] Building image from path: /tmp/build.3228830514.tar
I0917 17:17:01.652393   29507 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 17:17:01.664938   29507 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3228830514.tar
I0917 17:17:01.670702   29507 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3228830514.tar: stat -c "%s %y" /var/lib/minikube/build/build.3228830514.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3228830514.tar': No such file or directory
I0917 17:17:01.670731   29507 ssh_runner.go:362] scp /tmp/build.3228830514.tar --> /var/lib/minikube/build/build.3228830514.tar (3072 bytes)
I0917 17:17:01.699858   29507 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3228830514
I0917 17:17:01.714731   29507 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3228830514 -xf /var/lib/minikube/build/build.3228830514.tar
I0917 17:17:01.727661   29507 crio.go:315] Building image: /var/lib/minikube/build/build.3228830514
I0917 17:17:01.727722   29507 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-853088 /var/lib/minikube/build/build.3228830514 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0917 17:17:04.098058   29507 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-853088 /var/lib/minikube/build/build.3228830514 --cgroup-manager=cgroupfs: (2.370316199s)
I0917 17:17:04.098114   29507 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3228830514
I0917 17:17:04.111962   29507 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3228830514.tar
I0917 17:17:04.123090   29507 build_images.go:217] Built localhost/my-image:functional-853088 from /tmp/build.3228830514.tar
I0917 17:17:04.123128   29507 build_images.go:133] succeeded building to: functional-853088
I0917 17:17:04.123133   29507 build_images.go:134] failed building to: 
I0917 17:17:04.123154   29507 main.go:141] libmachine: Making call to close driver server
I0917 17:17:04.123163   29507 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:04.123466   29507 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:04.123487   29507 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 17:17:04.123526   29507 main.go:141] libmachine: Making call to close driver server
I0917 17:17:04.123538   29507 main.go:141] libmachine: (functional-853088) Calling .Close
I0917 17:17:04.123768   29507 main.go:141] libmachine: Successfully made call to close driver server
I0917 17:17:04.123784   29507 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
2024/09/17 17:17:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
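The ImageBuild log above shows the full flow: the build context is tarred, copied into the VM, extracted under /var/lib/minikube/build, and built there with podman. A minimal sketch of the same build-then-verify sequence, driven from Go with the exact commands and tag this test used (the paths and profile name come from this log), could look like:

// Sketch only: build an image inside the minikube VM and confirm it appears
// in the runtime's image list, mirroring the test flow above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-853088"
	tag := "localhost/my-image:" + profile

	// minikube tars testdata/build, copies it into the VM, and runs podman
	// build there, as the log above records.
	build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Confirm the new tag shows up in the runtime's image list.
	ls, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(ls), tag) {
		panic("built image not found in `image ls` output")
	}
	fmt.Println("image built and listed:", tag)
}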

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-853088
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image load --daemon kicbase/echo-server:functional-853088 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 image load --daemon kicbase/echo-server:functional-853088 --alsologtostderr: (1.514950404s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image load --daemon kicbase/echo-server:functional-853088 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-853088
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image load --daemon kicbase/echo-server:functional-853088 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image save kicbase/echo-server:functional-853088 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdspecific-port196079312/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.159976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdspecific-port196079312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "sudo umount -f /mount-9p": exit status 1 (258.639783ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-853088 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdspecific-port196079312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
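Note that the first findmnt check above fails and the test simply retries; the 9p mount is served by a background process and takes a moment to appear in the guest. A minimal sketch of that pattern, using the same --port and findmnt pipeline as the log (the host-side source directory /tmp/mount-src is a placeholder, the test used a temp dir):

// Sketch only: start a 9p mount on a fixed port, then poll for it over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-853088"

	// Run the mount in the background; it keeps serving until killed.
	mount := exec.Command(minikube, "mount", "-p", profile,
		"/tmp/mount-src:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The mount is not instantaneous, so poll findmnt inside the guest.
	for i := 0; i < 10; i++ {
		check := exec.Command(minikube, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p")
		if err := check.Run(); err == nil {
			fmt.Println("/mount-9p is served over 9p")
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never became visible in the guest")
}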

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image rm kicbase/echo-server:functional-853088 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-853088 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.221725049s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.44s)
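ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip: export an image from the cluster's runtime to the host, then load it back. A minimal sketch of that round trip with the same commands the tests ran (the tar path here is a placeholder; the log used the Jenkins workspace path):

// Sketch only: save an image to a tarball on the host and load it back.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
}

func main() {
	profile := "functional-853088"
	tar := "/tmp/echo-server-save.tar" // placeholder path

	// Export the image from the cluster's runtime to a tarball on the host...
	run("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tar)
	// ...and load it back in (the test does this after removing the image).
	run("-p", profile, "image", "load", tar)
	run("-p", profile, "image", "ls")
	fmt.Println("round trip complete")
}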

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T" /mount1: exit status 1 (298.344843ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-853088 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3645022975/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-853088
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-853088 image save --daemon kicbase/echo-server:functional-853088 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-853088
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-853088
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-853088
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-853088
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-181247 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0917 17:18:50.532257   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:18.238880   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-181247 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m11.123078304s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.82s)
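StartCluster boots a three-control-plane cluster with --ha and then checks every node with status. A minimal sketch of those two steps, with the verbosity flags dropped and the remaining flags copied from the command above:

// Sketch only: start an HA profile and report node status.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "ha-181247"

	// --ha provisions multiple control-plane nodes instead of one.
	start := exec.Command(minikube, "start", "-p", profile,
		"--wait=true", "--memory=2200", "--ha",
		"--driver=kvm2", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v\n%s", err, out))
	}

	// status reports every node in the profile; it can exit non-zero when a
	// node is unhealthy, so print whatever output was produced either way.
	status, err := exec.Command(minikube, "-p", profile, "status").Output()
	if err != nil {
		fmt.Printf("status returned %v\n", err)
	}
	fmt.Println(string(status))
}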

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-181247 -- rollout status deployment/busybox: (3.947714356s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-96b8c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-mxrbl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-w8wxj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-96b8c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-mxrbl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-w8wxj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-96b8c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-mxrbl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-w8wxj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-96b8c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-96b8c -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-mxrbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-mxrbl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-w8wxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-181247 -- exec busybox-7dff88458-w8wxj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
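PingHostFromPods resolves host.minikube.internal inside each busybox pod, extracts the address from line 5 of the nslookup output, and pings it once. A minimal sketch of that check for a single pod, reusing the same shell pipeline as the test (the pod name below is one of the pods created by DeployApp above and is only illustrative):

// Sketch only: verify the host is reachable from inside one pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-96b8c" // illustrative pod name from the log
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

	out, err := exec.Command("kubectl", "--context", "ha-181247",
		"exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	ping := exec.Command("kubectl", "--context", "ha-181247",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if pong, err := ping.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("ping failed: %v\n%s", err, pong))
	}
	fmt.Println("host is reachable from", pod)
}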

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-181247 -v=7 --alsologtostderr
E0917 17:21:24.983400   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:24.989892   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.001322   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.022791   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.064197   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.145665   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.307326   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:25.628868   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:21:26.270346   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-181247 -v=7 --alsologtostderr: (55.391176784s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
E0917 17:21:27.552463   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-181247 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp testdata/cp-test.txt ha-181247:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test.txt"
E0917 17:21:30.114799   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247:/home/docker/cp-test.txt ha-181247-m02:/home/docker/cp-test_ha-181247_ha-181247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test_ha-181247_ha-181247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247:/home/docker/cp-test.txt ha-181247-m03:/home/docker/cp-test_ha-181247_ha-181247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test_ha-181247_ha-181247-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247:/home/docker/cp-test.txt ha-181247-m04:/home/docker/cp-test_ha-181247_ha-181247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test_ha-181247_ha-181247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp testdata/cp-test.txt ha-181247-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m02:/home/docker/cp-test.txt ha-181247:/home/docker/cp-test_ha-181247-m02_ha-181247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test_ha-181247-m02_ha-181247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m02:/home/docker/cp-test.txt ha-181247-m03:/home/docker/cp-test_ha-181247-m02_ha-181247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test_ha-181247-m02_ha-181247-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m02:/home/docker/cp-test.txt ha-181247-m04:/home/docker/cp-test_ha-181247-m02_ha-181247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test.txt"
E0917 17:21:35.236708   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test_ha-181247-m02_ha-181247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp testdata/cp-test.txt ha-181247-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt ha-181247:/home/docker/cp-test_ha-181247-m03_ha-181247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test_ha-181247-m03_ha-181247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt ha-181247-m02:/home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test_ha-181247-m03_ha-181247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m03:/home/docker/cp-test.txt ha-181247-m04:/home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test_ha-181247-m03_ha-181247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp testdata/cp-test.txt ha-181247-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3499385804/001/cp-test_ha-181247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt ha-181247:/home/docker/cp-test_ha-181247-m04_ha-181247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247 "sudo cat /home/docker/cp-test_ha-181247-m04_ha-181247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt ha-181247-m02:/home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m02 "sudo cat /home/docker/cp-test_ha-181247-m04_ha-181247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 cp ha-181247-m04:/home/docker/cp-test.txt ha-181247-m03:/home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 ssh -n ha-181247-m03 "sudo cat /home/docker/cp-test_ha-181247-m04_ha-181247-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.25s)
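Note: the CopyFile steps above follow a full matrix — cp-test.txt is pushed onto each node and then cross-copied between every ordered pair of nodes, with a "sudo cat" check after each step. A small Go sketch that prints the same node-to-node copy matrix (illustrative only; it prints the commands rather than running them, and omits the host-side pulls and the cat checks):

package main

import "fmt"

func main() {
	nodes := []string{"ha-181247", "ha-181247-m02", "ha-181247-m03", "ha-181247-m04"}
	for _, src := range nodes {
		// push the test file onto src, then fan it out to every other node
		fmt.Printf("out/minikube-linux-amd64 -p ha-181247 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("out/minikube-linux-amd64 -p ha-181247 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}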

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.49800523s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 node delete m03 -v=7 --alsologtostderr
E0917 17:31:24.983238   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-181247 node delete m03 -v=7 --alsologtostderr: (15.961329517s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)
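Note: the go-template passed to kubectl above prints the status of each node's Ready condition, one per line. A self-contained Go sketch of how that template evaluates, using simplified local types with exported field names so text/template can reach them (kubectl itself evaluates the lowercase JSON field names against the live node list):

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

type nodeList struct{ Items []node }

func main() {
	// Same shape as the template in the command above.
	tmpl := `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
	var ready node
	ready.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := nodeList{Items: []node{ready, ready, ready}}
	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, list) // prints " True" on its own line, once per node
}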

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (286.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-181247 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0917 17:36:24.983220   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:37:48.047068   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-181247 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m45.362788184s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (286.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-181247 --control-plane -v=7 --alsologtostderr
E0917 17:38:50.532338   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-181247 --control-plane -v=7 --alsologtostderr: (1m15.638584583s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-181247 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (87.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-953181 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0917 17:41:24.983763   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-953181 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.547828503s)
--- PASS: TestJSONOutput/start/Command (87.55s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-953181 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-953181 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-953181 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-953181 --output=json --user=testUser: (7.383787426s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-144123 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-144123 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.11164ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d2228ffa-58fc-4c26-9823-e441029ebff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-144123] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"50ab3d6a-85ea-443d-bc71-7e2665fb1748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"1d323393-0991-4ff6-bdb8-fb6dad78ab42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e99604da-b078-4dc7-879b-6ffaeec53fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig"}}
	{"specversion":"1.0","id":"2e0c0e0b-ab17-427a-9121-ade63270f491","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube"}}
	{"specversion":"1.0","id":"013d841a-ec01-4c2b-906b-324dacfc45fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"984cbbcb-1763-4515-88fa-555e486e96b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8cd75540-751c-4633-9459-a433271777e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-144123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-144123
--- PASS: TestErrorJSONOutput (0.20s)
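Note: each stdout line above is a CloudEvents-style JSON object. A short Go sketch (local types for this sketch, not minikube's own) that decodes the final error event from that output and extracts its exit code and message:

package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The last line of the stdout block above, verbatim.
	line := `{"specversion":"1.0","id":"8cd75540-751c-4633-9459-a433271777e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
	// io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on linux/amd64
}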

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (92.61s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-400208 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-400208 --driver=kvm2  --container-runtime=crio: (45.379025041s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-424695 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-424695 --driver=kvm2  --container-runtime=crio: (44.301624403s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-400208
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-424695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-424695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-424695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-424695: (1.026758171s)
helpers_test.go:175: Cleaning up "first-400208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-400208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-400208: (1.017969096s)
--- PASS: TestMinikubeProfile (92.61s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-419508 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-419508 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.094682008s)
E0917 17:43:50.532026   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountFirst (28.10s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-419508 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-419508 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
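Note: the second command above verifies the mount by grepping the mount table for a 9p filesystem. An equivalent check sketched in Go, reading /proc/mounts on the guest (an illustration of what is being verified, not the test's own helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// has9pMount reports whether any entry in the given mounts file uses the 9p
// filesystem type. /proc/mounts fields: device mountpoint fstype options dump pass.
func has9pMount(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "9p" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := has9pMount("/proc/mounts")
	fmt.Println(ok, err)
}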

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-436586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-436586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.408186467s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-419508 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-436586
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-436586: (1.281519873s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.68s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-436586
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-436586: (20.680111706s)
--- PASS: TestMountStart/serial/RestartStopped (21.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-436586 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (110.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178778 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0917 17:46:24.983084   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178778 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.061244028s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.47s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-178778 -- rollout status deployment/busybox: (3.281352009s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-dh729 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-vcxjz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-dh729 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-vcxjz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-dh729 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-vcxjz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-dh729 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-dh729 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-vcxjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-178778 -- exec busybox-7dff88458-vcxjz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178778 -v 3 --alsologtostderr
E0917 17:46:53.603253   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-178778 -v 3 --alsologtostderr: (50.040089117s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.61s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-178778 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp testdata/cp-test.txt multinode-178778:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778:/home/docker/cp-test.txt multinode-178778-m02:/home/docker/cp-test_multinode-178778_multinode-178778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test_multinode-178778_multinode-178778-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778:/home/docker/cp-test.txt multinode-178778-m03:/home/docker/cp-test_multinode-178778_multinode-178778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test_multinode-178778_multinode-178778-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp testdata/cp-test.txt multinode-178778-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt multinode-178778:/home/docker/cp-test_multinode-178778-m02_multinode-178778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test_multinode-178778-m02_multinode-178778.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m02:/home/docker/cp-test.txt multinode-178778-m03:/home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test_multinode-178778-m02_multinode-178778-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp testdata/cp-test.txt multinode-178778-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile460922367/001/cp-test_multinode-178778-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt multinode-178778:/home/docker/cp-test_multinode-178778-m03_multinode-178778.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778 "sudo cat /home/docker/cp-test_multinode-178778-m03_multinode-178778.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 cp multinode-178778-m03:/home/docker/cp-test.txt multinode-178778-m02:/home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 ssh -n multinode-178778-m02 "sudo cat /home/docker/cp-test_multinode-178778-m03_multinode-178778-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.33s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-178778 node stop m03: (1.518174443s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178778 status: exit status 7 (445.948139ms)

                                                
                                                
-- stdout --
	multinode-178778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr: exit status 7 (435.438914ms)

                                                
                                                
-- stdout --
	multinode-178778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-178778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-178778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:47:41.670903   47032 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:47:41.671012   47032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:47:41.671023   47032 out.go:358] Setting ErrFile to fd 2...
	I0917 17:47:41.671027   47032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:47:41.671227   47032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 17:47:41.671435   47032 out.go:352] Setting JSON to false
	I0917 17:47:41.671467   47032 mustload.go:65] Loading cluster: multinode-178778
	I0917 17:47:41.671571   47032 notify.go:220] Checking for updates...
	I0917 17:47:41.671958   47032 config.go:182] Loaded profile config "multinode-178778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 17:47:41.671974   47032 status.go:255] checking status of multinode-178778 ...
	I0917 17:47:41.672424   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.672460   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.688373   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0917 17:47:41.688895   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.689548   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.689570   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.689898   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.690093   47032 main.go:141] libmachine: (multinode-178778) Calling .GetState
	I0917 17:47:41.691651   47032 status.go:330] multinode-178778 host status = "Running" (err=<nil>)
	I0917 17:47:41.691668   47032 host.go:66] Checking if "multinode-178778" exists ...
	I0917 17:47:41.691957   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.691997   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.707354   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0917 17:47:41.707805   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.708363   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.708399   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.708727   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.708901   47032 main.go:141] libmachine: (multinode-178778) Calling .GetIP
	I0917 17:47:41.711487   47032 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:47:41.711859   47032 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:47:41.711885   47032 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:47:41.711987   47032 host.go:66] Checking if "multinode-178778" exists ...
	I0917 17:47:41.712270   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.712303   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.727935   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0917 17:47:41.728324   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.728799   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.728824   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.729154   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.729335   47032 main.go:141] libmachine: (multinode-178778) Calling .DriverName
	I0917 17:47:41.729501   47032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:47:41.729535   47032 main.go:141] libmachine: (multinode-178778) Calling .GetSSHHostname
	I0917 17:47:41.732002   47032 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:47:41.732401   47032 main.go:141] libmachine: (multinode-178778) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:92:d1", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:00 +0000 UTC Type:0 Mac:52:54:00:c4:92:d1 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-178778 Clientid:01:52:54:00:c4:92:d1}
	I0917 17:47:41.732437   47032 main.go:141] libmachine: (multinode-178778) DBG | domain multinode-178778 has defined IP address 192.168.39.35 and MAC address 52:54:00:c4:92:d1 in network mk-multinode-178778
	I0917 17:47:41.732576   47032 main.go:141] libmachine: (multinode-178778) Calling .GetSSHPort
	I0917 17:47:41.732736   47032 main.go:141] libmachine: (multinode-178778) Calling .GetSSHKeyPath
	I0917 17:47:41.732850   47032 main.go:141] libmachine: (multinode-178778) Calling .GetSSHUsername
	I0917 17:47:41.732986   47032 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778/id_rsa Username:docker}
	I0917 17:47:41.826648   47032 ssh_runner.go:195] Run: systemctl --version
	I0917 17:47:41.833788   47032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:47:41.848936   47032 kubeconfig.go:125] found "multinode-178778" server: "https://192.168.39.35:8443"
	I0917 17:47:41.848978   47032 api_server.go:166] Checking apiserver status ...
	I0917 17:47:41.849030   47032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:47:41.865821   47032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup
	W0917 17:47:41.876211   47032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:47:41.876293   47032 ssh_runner.go:195] Run: ls
	I0917 17:47:41.881087   47032 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I0917 17:47:41.885337   47032 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I0917 17:47:41.885360   47032 status.go:422] multinode-178778 apiserver status = Running (err=<nil>)
	I0917 17:47:41.885372   47032 status.go:257] multinode-178778 status: &{Name:multinode-178778 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:47:41.885393   47032 status.go:255] checking status of multinode-178778-m02 ...
	I0917 17:47:41.885703   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.885745   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.902305   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
	I0917 17:47:41.902780   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.903269   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.903288   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.903643   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.903848   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetState
	I0917 17:47:41.905474   47032 status.go:330] multinode-178778-m02 host status = "Running" (err=<nil>)
	I0917 17:47:41.905490   47032 host.go:66] Checking if "multinode-178778-m02" exists ...
	I0917 17:47:41.905767   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.905800   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.922314   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I0917 17:47:41.922729   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.923209   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.923229   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.923565   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.923749   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetIP
	I0917 17:47:41.926777   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | domain multinode-178778-m02 has defined MAC address 52:54:00:c9:b7:cc in network mk-multinode-178778
	I0917 17:47:41.927159   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:b7:cc", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:59 +0000 UTC Type:0 Mac:52:54:00:c9:b7:cc Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-178778-m02 Clientid:01:52:54:00:c9:b7:cc}
	I0917 17:47:41.927178   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | domain multinode-178778-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:c9:b7:cc in network mk-multinode-178778
	I0917 17:47:41.927316   47032 host.go:66] Checking if "multinode-178778-m02" exists ...
	I0917 17:47:41.927621   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:41.927660   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:41.943947   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0917 17:47:41.944466   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:41.944968   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:41.944987   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:41.945365   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:41.945563   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .DriverName
	I0917 17:47:41.945774   47032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:47:41.945804   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetSSHHostname
	I0917 17:47:41.948457   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | domain multinode-178778-m02 has defined MAC address 52:54:00:c9:b7:cc in network mk-multinode-178778
	I0917 17:47:41.948901   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:b7:cc", ip: ""} in network mk-multinode-178778: {Iface:virbr1 ExpiryTime:2024-09-17 18:45:59 +0000 UTC Type:0 Mac:52:54:00:c9:b7:cc Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-178778-m02 Clientid:01:52:54:00:c9:b7:cc}
	I0917 17:47:41.948928   47032 main.go:141] libmachine: (multinode-178778-m02) DBG | domain multinode-178778-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:c9:b7:cc in network mk-multinode-178778
	I0917 17:47:41.949059   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetSSHPort
	I0917 17:47:41.949214   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetSSHKeyPath
	I0917 17:47:41.949343   47032 main.go:141] libmachine: (multinode-178778-m02) Calling .GetSSHUsername
	I0917 17:47:41.949502   47032 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19662-11085/.minikube/machines/multinode-178778-m02/id_rsa Username:docker}
	I0917 17:47:42.028647   47032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:47:42.044049   47032 status.go:257] multinode-178778-m02 status: &{Name:multinode-178778-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:47:42.044094   47032 status.go:255] checking status of multinode-178778-m03 ...
	I0917 17:47:42.044391   47032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 17:47:42.044428   47032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 17:47:42.059885   47032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0917 17:47:42.060262   47032 main.go:141] libmachine: () Calling .GetVersion
	I0917 17:47:42.060794   47032 main.go:141] libmachine: Using API Version  1
	I0917 17:47:42.060813   47032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 17:47:42.061132   47032 main.go:141] libmachine: () Calling .GetMachineName
	I0917 17:47:42.061332   47032 main.go:141] libmachine: (multinode-178778-m03) Calling .GetState
	I0917 17:47:42.063013   47032 status.go:330] multinode-178778-m03 host status = "Stopped" (err=<nil>)
	I0917 17:47:42.063029   47032 status.go:343] host is not running, skipping remaining checks
	I0917 17:47:42.063036   47032 status.go:257] multinode-178778-m03 status: &{Name:multinode-178778-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
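The node status probe logged above follows a fixed sequence: host state via libmachine, kubelet via systemctl is-active, then the apiserver first via the kube-apiserver process's freezer cgroup (skipped here because that lookup exited 1) and finally via the /healthz endpoint. A minimal manual re-run of the last two probes, assuming the profile name and node IP from this log and that /healthz is reachable anonymously (the Kubernetes default through the system:public-info-viewer role):

    out/minikube-linux-amd64 ssh -p multinode-178778 "sudo systemctl is-active kubelet"   # prints "active" on a healthy node
    curl -sk https://192.168.39.35:8443/healthz                                           # prints "ok" when the apiserver is up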

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-178778 node start m03 -v=7 --alsologtostderr: (37.705254585s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.35s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-178778 node delete m03: (1.85102648s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.39s)
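The go-template passed to kubectl above prints one Ready-condition status per node, so after deleting m03 the test expects only "True" lines for the two remaining nodes. An equivalent check that avoids the nested quoting, using jsonpath instead of the test's go-template (a sketch, not the test's own invocation):

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'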

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (198.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178778 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0917 17:56:24.985368   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:58:50.531864   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178778 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.260108351s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-178778 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (198.81s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-178778
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178778-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-178778-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.710789ms)

                                                
                                                
-- stdout --
	* [multinode-178778-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-178778-m02' is duplicated with machine name 'multinode-178778-m02' in profile 'multinode-178778'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-178778-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-178778-m03 --driver=kvm2  --container-runtime=crio: (45.886481974s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-178778
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-178778: exit status 80 (224.248936ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-178778 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-178778-m03 already exists in multinode-178778-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-178778-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-178778-m03: (1.049552515s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.28s)
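Both failures above are name-collision guards: a new profile may not reuse a machine name that already belongs to an existing multinode profile (exit 14, MK_USAGE), and node add refuses a node name already owned by another profile (exit 80, GUEST_NODE_ADD). A minimal sketch of the two ways out; multinode-demo is a hypothetical, non-conflicting profile name, the rest are taken from this run:

    # either start under a profile name that does not collide ...
    out/minikube-linux-amd64 start -p multinode-demo --driver=kvm2 --container-runtime=crio
    # ... or remove the standalone profile that owns the conflicting node name, as the test cleanup does,
    # after which "node add" should no longer collide
    out/minikube-linux-amd64 delete -p multinode-178778-m03
    out/minikube-linux-amd64 node add -p multinode-178778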

                                                
                                    
x
+
TestScheduledStopUnix (114.25s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-054634 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-054634 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.610781988s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054634 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-054634 -n scheduled-stop-054634
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054634 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054634 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054634 -n scheduled-stop-054634
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-054634
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-054634 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0917 18:06:24.984535   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-054634
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-054634: exit status 7 (62.127617ms)

                                                
                                                
-- stdout --
	scheduled-stop-054634
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054634 -n scheduled-stop-054634
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-054634 -n scheduled-stop-054634: exit status 7 (63.97604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-054634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-054634
--- PASS: TestScheduledStopUnix (114.25s)
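The scheduled-stop flow exercised above is driven entirely by stop flags: --schedule arms a delayed stop, --cancel-scheduled disarms it, and the remaining delay is visible through the TimeToStop status field. The same sequence by hand, using the profile name from this test:

    out/minikube-linux-amd64 stop -p scheduled-stop-054634 --schedule 5m                 # arm a stop 5 minutes out
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-054634    # show the remaining delay
    out/minikube-linux-amd64 stop -p scheduled-stop-054634 --cancel-scheduled            # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-054634 --schedule 15s                # arm again and let it fire; status then reports Stopped (exit 7)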

                                                
                                    
x
+
TestRunningBinaryUpgrade (211.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3847207952 start -p running-upgrade-271344 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3847207952 start -p running-upgrade-271344 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.223260377s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-271344 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-271344 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.784658714s)
helpers_test.go:175: Cleaning up "running-upgrade-271344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-271344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-271344: (1.19623624s)
--- PASS: TestRunningBinaryUpgrade (211.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (172.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1506704182 start -p stopped-upgrade-775296 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1506704182 start -p stopped-upgrade-775296 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m41.386544893s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1506704182 -p stopped-upgrade-775296 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1506704182 -p stopped-upgrade-775296 stop: (2.152634773s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-775296 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0917 18:08:50.534726   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-775296 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.873012673s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-639892 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-639892 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.471461ms)

                                                
                                                
-- stdout --
	* [false-639892] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:06:56.511443   54719 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:06:56.511563   54719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:06:56.511568   54719 out.go:358] Setting ErrFile to fd 2...
	I0917 18:06:56.511573   54719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:06:56.511773   54719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-11085/.minikube/bin
	I0917 18:06:56.512466   54719 out.go:352] Setting JSON to false
	I0917 18:06:56.513575   54719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6531,"bootTime":1726589885,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 18:06:56.513697   54719 start.go:139] virtualization: kvm guest
	I0917 18:06:56.516015   54719 out.go:177] * [false-639892] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 18:06:56.517523   54719 notify.go:220] Checking for updates...
	I0917 18:06:56.517538   54719 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:06:56.519214   54719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:06:56.520730   54719 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	I0917 18:06:56.522232   54719 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	I0917 18:06:56.523559   54719 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 18:06:56.525029   54719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:06:56.527530   54719 config.go:182] Loaded profile config "kubernetes-upgrade-644038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0917 18:06:56.527633   54719 config.go:182] Loaded profile config "offline-crio-624774": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 18:06:56.527708   54719 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:06:56.566381   54719 out.go:177] * Using the kvm2 driver based on user configuration
	I0917 18:06:56.567617   54719 start.go:297] selected driver: kvm2
	I0917 18:06:56.567633   54719 start.go:901] validating driver "kvm2" against <nil>
	I0917 18:06:56.567644   54719 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:06:56.569562   54719 out.go:201] 
	W0917 18:06:56.570746   54719 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0917 18:06:56.571999   54719 out.go:201] 

                                                
                                                
** /stderr **
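The exit-14 failure above is the expected guard for this flag combination: with --container-runtime=crio a CNI plugin is mandatory, so --cni=false is rejected before any VM is created. A short sketch of combinations that do pass validation; cni-demo is a hypothetical profile name, and kindnet is one of the CNI choices exercised later in this run:

    # rejected, as shown above
    out/minikube-linux-amd64 start -p false-639892 --cni=false --driver=kvm2 --container-runtime=crio
    # accepted: let minikube pick a CNI ...
    out/minikube-linux-amd64 start -p cni-demo --driver=kvm2 --container-runtime=crio
    # ... or name one explicitly
    out/minikube-linux-amd64 start -p cni-demo --cni=kindnet --driver=kvm2 --container-runtime=crio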
net_test.go:88: 
----------------------- debugLogs start: false-639892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-639892

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639892"

                                                
                                                
----------------------- debugLogs end: false-639892 [took: 3.060176944s] --------------------------------
helpers_test.go:175: Cleaning up "false-639892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-639892
--- PASS: TestNetworkPlugins/group/false (3.34s)

                                                
                                    
x
+
TestPause/serial/Start (93.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-246701 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-246701 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m33.288732488s)
--- PASS: TestPause/serial/Start (93.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-775296
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (68.523897ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-267093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-11085/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-11085/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
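The exit-14 here is the documented flag conflict: --no-kubernetes cannot be combined with --kubernetes-version, and the error text points at the fix when the version comes from a global config value. A sketch of the two accepted forms, reusing the profile name from this test (the pinned version is the one other profiles in this run use):

    # clear a globally configured version (only needed if one was set via "minikube config set")
    out/minikube-linux-amd64 config unset kubernetes-version
    # then start without Kubernetes components at all ...
    out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # ... or drop --no-kubernetes and pin a version instead
    out/minikube-linux-amd64 start -p NoKubernetes-267093 --kubernetes-version=v1.31.1 --driver=kvm2 --container-runtime=crio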

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (43.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-267093 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-267093 --driver=kvm2  --container-runtime=crio: (43.72997617s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-267093 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.072807297s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-267093 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-267093 status -o json: exit status 2 (227.294748ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-267093","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-267093
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-267093: (1.005224064s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.31s)
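The JSON status above is the expected shape after restarting an existing profile with --no-kubernetes: the host keeps running while Kubelet and APIServer report Stopped, and minikube status exits 2 to flag the stopped components. A sketch of asserting on those fields, assuming jq is available on the host (it is not part of the test itself):

    out/minikube-linux-amd64 -p NoKubernetes-267093 status -o json | jq -r '[.Host, .Kubelet, .APIServer] | @tsv'
    # expected: Running   Stopped   Stopped   (the minikube command itself exits 2)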

                                                
                                    
x
+
TestNoKubernetes/serial/Start (39.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0917 18:11:08.051501   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-267093 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.767809237s)
--- PASS: TestNoKubernetes/serial/Start (39.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-267093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-267093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.615519ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
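The assertion here leans on systemctl semantics: is-active exits 0 only when the unit is active, and a stopped or absent kubelet typically yields exit code 3, which minikube ssh surfaces as the "Process exited with status 3" line above. Dropping --quiet makes the state visible as well (same profile name as the test):

    out/minikube-linux-amd64 ssh -p NoKubernetes-267093 "sudo systemctl is-active kubelet"
    # prints "inactive" (or "unknown" if the unit is absent) and exits non-zero, which is what the test checks for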

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-267093
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-267093: (1.291074531s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (67.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-267093 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-267093 --driver=kvm2  --container-runtime=crio: (1m7.920474903s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (67.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (129.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m9.093118821s)
--- PASS: TestNetworkPlugins/group/auto/Start (129.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-267093 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-267093 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.568609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (135.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m15.300727103s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (135.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v6c7f" [57e1a606-31ca-436f-ae20-516201dd8882] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v6c7f" [57e1a606-31ca-436f-ae20-516201dd8882] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004487513s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)
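The NetCatPod step above replaces the netcat deployment from testdata/netcat-deployment.yaml and then waits (up to 15m) for pods labelled app=netcat to reach Running. A minimal client-go sketch of that kind of wait, assuming the current kubeconfig context points at the cluster under test (illustrative only, not the helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until at least one pod matches the selector and every
// matching pod reports phase Running, or the context times out.
func waitForRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Uses the default kubeconfig and its current context; selecting a specific
	// context such as auto-639892 is left out of this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	fmt.Println(waitForRunning(ctx, cs, "default", "app=netcat"))
}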

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rrk8s" [50d00c45-65a2-4f28-990d-fb7550265eab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00537318s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
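Taken together, the DNS, Localhost, and HairPin steps probe three distinct paths from inside the netcat pod: cluster DNS (nslookup kubernetes.default), the pod's own loopback on port 8080, and hairpin traffic where the pod reaches itself back through its own Service named netcat. A small sketch that reproduces the same probes via kubectl, mirroring the commands in the log (hypothetical driver code, not the net_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctxName := "auto-639892" // kubectl context under test, taken from the log above
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		args := append([]string{"--context", ctxName}, p...)
		// A zero exit from each probe means DNS, loopback, and hairpin all work.
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			fmt.Println("probe failed:", p, err)
		}
	}
}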

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t6nx8" [37e44037-b88e-4c83-86b4-7796c1255e77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t6nx8" [37e44037-b88e-4c83-86b4-7796c1255e77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004953565s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (73.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m13.519461702s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (87.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.411975063s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (92.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m32.872638086s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8dzgb" [4d4d94cc-74ae-4a1d-8f6d-24e72237bddd] Running
E0917 18:16:24.983340   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/functional-853088/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005816608s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m5tpm" [c621e6d2-565a-485a-afc1-7863f260801c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m5tpm" [c621e6d2-565a-485a-afc1-7863f260801c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003761872s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8khzz" [9966b483-5a34-4514-a3c1-e8714278602b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8khzz" [9966b483-5a34-4514-a3c1-e8714278602b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004886453s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.248348498s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-27fds" [0ecc3ae0-5e1e-4e7f-a694-b90f765bb73b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-27fds" [0ecc3ae0-5e1e-4e7f-a694-b90f765bb73b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005243853s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (105.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-639892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m45.329679828s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (148.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (2m28.790314523s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (148.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5df7w" [827c3886-d365-4df4-9404-6fe8af1734f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004876067s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bc2d8" [a08966ad-dc81-4f67-8531-0b44dea2ab1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bc2d8" [a08966ad-dc81-4f67-8531-0b44dea2ab1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00428645s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (56.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-081863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0917 18:18:50.532270   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-081863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (56.615755382s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-639892 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-639892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8mksb" [00184f37-11b2-4300-9dfe-ffd44103a151] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8mksb" [00184f37-11b2-4300-9dfe-ffd44103a151] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00509399s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-639892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-639892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)
E0917 18:48:10.820561   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/flannel-639892/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-438836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-438836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m27.359151472s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-081863 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [27a34430-a38d-47d2-bc93-86ed5e647b06] Pending
helpers_test.go:344: "busybox" [27a34430-a38d-47d2-bc93-86ed5e647b06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [27a34430-a38d-47d2-bc93-86ed5e647b06] Running
E0917 18:19:48.896456   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:48.902830   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:48.914568   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:48.936019   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:48.977506   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:49.058961   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:49.220497   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:49.542176   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:50.183796   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:19:51.465212   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.007820392s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-081863 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)
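The DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to run, and then reads the container's open-file limit with `ulimit -n`. A hedged sketch of that final check (hypothetical helper, not the start_stop_delete_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// openFileLimit runs `ulimit -n` inside the busybox pod and parses the result.
func openFileLimit(kubectlContext string) (int, error) {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, err := openFileLimit("embed-certs-081863")
	fmt.Println("open-file limit:", n, "err:", err)
}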

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-081863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0917 18:19:54.027313   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-081863 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328741 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [65deaf3a-816b-4141-b0dc-1f188c4e9e39] Pending
helpers_test.go:344: "busybox" [65deaf3a-816b-4141-b0dc-1f188c4e9e39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 18:20:01.337640   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [65deaf3a-816b-4141-b0dc-1f188c4e9e39] Running
E0917 18:20:06.459508   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/kindnet-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:20:09.391929   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/auto-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004346022s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328741 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328741 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328741 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025917689s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-328741 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [353e376b-312a-4793-a814-4bb9c2ccec00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [353e376b-312a-4793-a814-4bb9c2ccec00] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004030173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-438836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-438836 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (667s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-081863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-081863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m6.731956706s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-081863 -n embed-certs-081863
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (667.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (608.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0917 18:22:43.182771   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/bridge-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:22:43.827935   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/calico-639892/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:23:06.482598   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/custom-flannel-639892/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m7.823456327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328741 -n no-preload-328741
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (608.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-438836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-438836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m19.4254066s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-438836 -n default-k8s-diff-port-438836
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (559.68s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-190698 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-190698 --alsologtostderr -v=3: (5.580872956s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-190698 -n old-k8s-version-190698: exit status 7 (64.548299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-190698 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0917 18:23:50.532593   18259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-11085/.minikube/profiles/addons-408385/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
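Exit status 7 from `minikube status` on the stopped profile is logged as "may be ok", and the test goes on to enable the dashboard addon against that stopped profile. A hedged sketch of reading such an exit code, under the assumption (worth verifying against the minikube documentation rather than taking as definitive) that the status exit code is a bit mask with separate bits for the host, the control plane, and Kubernetes not running, so a fully stopped profile yields 7:

package main

import "fmt"

// describeStatusExit decodes a `minikube status` exit code as a bit mask.
// Assumed convention (verify against minikube docs): bit 0 = host not running,
// bit 1 = control plane not running, bit 2 = kubernetes not running.
func describeStatusExit(code int) []string {
	var reasons []string
	if code&1 != 0 {
		reasons = append(reasons, "host not running")
	}
	if code&2 != 0 {
		reasons = append(reasons, "control plane not running")
	}
	if code&4 != 0 {
		reasons = append(reasons, "kubernetes not running")
	}
	return reasons
}

func main() {
	fmt.Println(describeStatusExit(7)) // stopped profile, as in the log above
}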

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-089562 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-089562 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (52.022108485s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-089562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-089562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.293424273s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-089562 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-089562 --alsologtostderr -v=3: (7.345736415s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-089562 -n newest-cni-089562
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-089562 -n newest-cni-089562: exit status 7 (64.857892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-089562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-089562 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-089562 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.800064755s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-089562 -n newest-cni-089562
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-089562 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-089562 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-089562 --alsologtostderr -v=1: (1.083344763s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-089562 -n newest-cni-089562
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-089562 -n newest-cni-089562: exit status 2 (323.360883ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-089562 -n newest-cni-089562
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-089562 -n newest-cni-089562: exit status 2 (284.93866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-089562 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-089562 -n newest-cni-089562
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-089562 -n newest-cni-089562
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)
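The Pause step flips the profile to Paused, confirms that the component status checks now exit non-zero (tolerated as "may be ok", with APIServer reporting Paused and Kubelet reporting Stopped), and then unpauses and re-checks. A minimal sketch of the same round trip (hypothetical driver code, not the test's own implementation):

package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	return exec.Command(bin, args...).Run()
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const profile = "newest-cni-089562"

	if err := run(mk, "pause", "-p", profile); err != nil {
		panic(err)
	}
	// While paused, status reports Paused/Stopped components and exits non-zero;
	// the test logs this as "status error ... (may be ok)" instead of failing.
	_ = run(mk, "status", "--format={{.APIServer}}", "-p", profile)

	if err := run(mk, "unpause", "-p", profile); err != nil {
		panic(err)
	}
	// After unpause, the same status check is expected to succeed again.
	if err := run(mk, "status", "--format={{.APIServer}}", "-p", profile); err != nil {
		fmt.Println("apiserver still not reporting Running:", err)
	}
}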

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 3.19
262 TestNetworkPlugins/group/cilium 3.6
268 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
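
All eight tunnel subtests above were skipped for the same reason: the test user could not run the 'route' command without a password prompt. One way to avoid that on a dedicated CI host is a passwordless sudoers entry for the route binary, granted via visudo, along the lines of the sketch below (the user name and binary paths are illustrative assumptions; this log does not record the host's actual sudo configuration or the exact command the test invokes):

	ci-user ALL=(ALL) NOPASSWD: /usr/sbin/route, /sbin/route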

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-639892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-639892

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639892"

                                                
                                                
----------------------- debugLogs end: kubenet-639892 [took: 3.010589597s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-639892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-639892
--- SKIP: TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-639892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-639892" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-639892

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-639892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639892"

                                                
                                                
----------------------- debugLogs end: cilium-639892 [took: 3.439318588s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-639892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-639892
--- SKIP: TestNetworkPlugins/group/cilium (3.60s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-671774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-671774
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    